@@mrteemug5329 I'm fully aware that AI has the power to solve some major problems in the world, or break everything we've built. But also that there is no stopping it now, the genie is out of the bottle and one way or another, people will develop and improve it, so hell, enjoy using the fun AI tools while we can 😅
@@3DRevolution yeah, Pandora's box has already been opened, no stopping it at this point. I can relish my memories of living in a time with no smart devices and a much simpler life. So long as the future is no Skynet I'll be alright lol :D
@@mrteemug5329 Yeah, the explosion of technology in my lifetime is quite amazing. I grew up with a wooden CRT TV that you had to manually tune to the frequency of a channel, with movies saved to spools of magnetic tape in a plastic box, and when a phone meant a thing connected by a coiled cord to the wall. And now I can have a long philosophical spoken conversation with an artificial intelligence whilst I monitor my 3D printer via a video feed on my watch.
By that I was referring to a slicer, which is software used to prepare a 3D model for 3D printing. Effectively you feed in a 3D model, the specifications of the machine you'll be printing with, and your desired print settings, and the slicer creates a file which is just a list of machine commands for the printer to follow. In this video I used both PrusaSlicer and Bambu Studio (two of the most popular slicers).
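To give a rough idea, the file a slicer spits out (G-code) is literally just line after line of tiny machine commands. A few illustrative lines below, though treat these purely as an example, the exact commands and values vary by printer, slicer, and firmware:
    M104 S200           ; heat the nozzle to 200°C
    M140 S60            ; heat the bed to 60°C
    G28                 ; home all axes
    G1 Z0.2 F3000       ; drop to first layer height
    G1 X50 Y50 E5 F1500 ; move to X50,Y50 while extruding 5mm of filament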
Definitely, but at the time of recording this video GPT4o wasn't yet a thing and as I was intentionally only covering tools available for free, 3.5 was the only one to trial here.
I absolutely have. There were several other tools I tried out in prep for this video but I found the results from 3D AI Studio tended to be similar to ones I was covering already here, so didn't feel it was adding anything new to the video. But yeah, it's definitely worth a play with, I enjoyed it.
@@JanHMR That's true, but as texture isn't currently very useful for 3D printing, that wasn't something I was factoring in. But it is definitely a cool feature.
@@3DRevolution True that! Excited to see how the space evolves. I think in the future we will be able to generate 3D models of whatever we can imagine in production quality, exciting times
At the time of recording this video, 4o hadn't yet been announced, and as I mention, in this video I'm explicitly looking at tools and services that are available to use for free, so 3.5 was the only version of ChatGPT to choose from at the time. That said I do have GPT4 and had already tried this with that at the time, and have tested it with 4o since with near identical results. The only difference is, once, 4o literally gave me an actual functional STL to download. However it just contained an egg shape, and through the 20+ further attempts, after telling me it was giving me an STL it would then come back and say it doesn't yet have the capability to make STL files, even after me reminding it that it already had. And I found the results of using 4o to write code for OpenSCAD identical to those in this video.
Unrelated to 3D printing but: I used to pay people all the time to get custom Python scripts made, usually for automating file sorting/transfers/conversion etc. Ever since the AIs came out, I have not had to pay a single cent to anyone anymore. The AI can do it all. ChatGPT is really bad at coding and fixing errors. Copilot usually gets it on the first or second try. The only issue is that there's a 5 message limit, and a character limit, so if you don't get it working in 5 attempts, you basically have to start over. I think a better way for you to test the ChatGPT and Copilot OpenSCAD generation would be to tell it the errors, and to fix them, rather than you correcting them yourself. Usually when I get errors, I just copy paste the console output into the AI, and it automatically picks up on the error, and suggests various ways to correct it, or I just give it the "no yapping" prompt, and ask for a solution
I absolutely appreciate that and I'd actually spent an entire day doing that with each of them before producing this video, but no matter how much back and forth I did with them, they never improved with this unfortunately. It made so little difference that I just didn't think it was worth including in the video. Thanks for the suggestion though, I'll definitely be doing more and different tests and comparisons in the future.
It's devastating and I hate it… Modelling things in 3D is literally the most essential aspect of being a 3D artist and now everyone without skill and dedication can create useless things like this…
I'm a fan of how you put this content together, but I can't fathom why you think running against 3.5 would result in content that provides any benefit for anyone. I am sure you are aware of the utter night and day difference in reasoning between the two. Hopefully you are also aware that even GPT-4o does barely any better at reasoning within a 3D parametric space. That video would have at least been useful and insightful.
Human language as opposed to computer language, it's a common term. Similar to 'natural language' or 'natural spoken language'. It's also popular for people who enjoy human music, like those classics, 'boop', 'beep', and 'ding'. :')
@@3DRevolution You make an interesting point. There's a phenomenon that if people know that someone made something with AI they will appreciate it less (often far less). This is then associated with it being "AI-made" and for some reason people forget it was a human using the AI to make it. So, I wonder if it does boil down to this evolutionarily early concept of "us" & "not us." Like, people don't usually use disgust emotions to describe something of decently high aesthetic quality. But disgust does usually come from a place of deep evolutionary biology. I also wonder how disgusted people were at the industrial revolution. Edit: And, what does it mean about those of us who don't devalue stuff for having been made with AI? 🤔
@@RubelliteFae are you familiar with the term Luddite? Commonly used these days to refer to someone who dislikes or is afraid of technology or technological change/development. It actually originates from a community of textile workers in Nottingham, England in the 19th century, who protested the new industrial textile machinery that they felt threatened their industry. They conducted raids and sabotage on machines and revered a mythical man by the name of Ned Ludd from the 1700s, the namesake of the Luddites. I feel with every technological or societal change we experience a modern version of this, and with AI it threatens to be both a technological AND a societal change.
@@3DRevolution Yeah, I knew about the Luddites and the term neoluddite. I suppose I had presumed that happened due to panic & fear, not revulsion. Sure, the former has been happening with AI, but the latter is what has taken me by surprise. It's strange. We've known this revolution is coming my whole 41 years of living (with some even predicting 2020s as the timeframe), but so few seem to be prepared for it.
@@RubelliteFae it's difficult for the masses to truly believe change is coming until they see it, and if it's a drastic change, any tale that is foretold of it just seems fantastical.
Nope, ChatGPT doesn't have an understanding of what a rocket looks like. It's a language model, not a general intelligence. It's picked up a pattern in language use. It's spotted a pattern of setting variables in scripts, so it's copied that. It doesn't "understand" what a variable is, or how it functions. Computer code is a language, so it can produce something that convincingly follows the rules. That doesn't mean it actually works! Much like if you get it to write an essay. Surface level, it looks impressive. But if you actually know the subject, it's kinda meaningless. You'd be better off searching Stack Exchange. Which is pretty much what LLMs did when learning what code looks like. Generative AI is a useful tool, but it has to be used very carefully. It's not an information resource, as much as Google & MS are trying to push it into one. It cannot fact check. Using generative AI incorrectly can be dangerous. Uh. despite my rambling, good video, & an interesting experiment! :)
Haha I appreciate your rambling and your comment. I completely agree that it's not what LLMs are designed for or intended to do, but I thought it would be interesting to see how they'd perform.
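To give people a feel for the failure mode, here's my own mocked-up sketch of the kind of OpenSCAD the LLMs tended to hand back (an illustration, not a verbatim output): syntactically valid, every primitive present, but the spatial relationships wrong:
    cylinder(h = 40, r = 10);                 // rocket body: fine
    translate([0, 0, 30])                     // nose cone starts 10mm INSIDE the body
        cylinder(h = 15, r1 = 10, r2 = 0);
    for (a = [0, 120, 240])
        rotate([0, 0, a])
            translate([25, 0, 0])             // fins float 15mm away from the body
                cube([2, 10, 15]);
It renders without errors, which is exactly why it looks so convincing at a glance.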
All the sites and services I covered in this video are available to use for free. If people were to pay to use the services more, they'd only be doing so after having already used them for free and therefore would be confident that the quality of the output was suitable for them. Also remember generative AI improves with use and further training. Some people may even subscribe to these services just to fund their continued existence as an investment in what they will become. Look at how text to image and text to video generative AI improved in just 9 months. Paraphrasing Sam Altman from earlier this year, 'ChatGPT 4 is amazing and is constantly awing and amazing people, but ChatGPT 4 is the worst it will ever be'. The same could be said for these too. They will only get better.
I don't mean to show up just to drag a video but this popped up on my feed so I'm gonna throw this thought out there. Let this AI nonsense die, it's not good and I really doubt it ever will be. The majority of generative LLMs are just a scam, which is especially evident in the 3D generative tools you showcased. The fact that they're charging to create a mediocre and mostly unusable mesh with no consistency in output should be pretty telling. Add on the problematic creation of these generative products using massive amounts of unlicensed, uncredited, uncompensated work and you risk promoting exploitation. The idea of typing an idea and getting a 3D model is amazing, but the reality of what you get is far from that. Judging from the history of LLM companies it will never reach that idea either.
I totally get that some people aren't a fan of the changes that various AI is bringing, and I straddle the fence on it myself. However, I'd be very interested to know what you mean by most LLMs being a scam. I use LLMs almost daily and they do exactly what they say on the tin.
As for the quality of these models, I do agree, the quality is not there... yet. As I mention in the video, every tool I covered here is available to use for free (to a limit) right now. Of course they will charge for excessive usage as running an AI server farm costs a huge amount of money. But also they need to get as many people using it as possible because that is how it will improve. They give people a chance to use their tools for free, so people are only going to pay to use their service more if they've already tried it (for free) and are happy that the quality meets their requirements.
As I mention in the video, look at those horrifying and laughable AI generated videos of Will Smith eating spaghetti. That was completely unusable and pretty much a joke, and was the best we could expect at the time. 9 months later and Sora is announced, demoing almost photorealistic minute-long video clips. It took like 3 days for the world to realise the pope hadn't suddenly become fashionable after MidJourney hit the internet, and that was almost a year ago with plenty of improvement since then. This is early days for this technology and it's evolving quickly. Yes there are questions around the training data but that's a whole separate conversation.
@@3DRevolution The training data is generally the forefront of the problem in my eyes, but moving past that to the note of calling it a scam. I call it a scam because the technology has been "getting better" or "only improving" for years. The best I've seen is simply a different option in a production pipeline, often worse and rarely better. Especially when a company is selling an amount of generations. It's more akin to a mobile game with random lootboxes, or slot machines, than an actual art tool.
I think it's also important to note that these tools are never developed as a tool for artists. It's not meant to support but replace, and it's proven time and again that it can't do that. In the realm of 3D models generative tech skips to the end product and it's virtually useless, there is nothing of a functional 3D pipeline and I've seen no intent of taking that direction. The only successful application I've seen for any generative AI tech is to create internet spam (or worse)
@@gabrielspangler6964 The training data is a difficult one and I am divided on that. On the one hand I don't think they should be just taking other people's protected work without permission, and as a creative myself I empathise with that side of things. However, equally I feel that on the whole, the various forms of generative AI aren't copying. If I knew that a photo of me had been used to train MidJourney, I could write a detailed prompt describing that photo exactly and run it through MidJourney 1 million times. It would never just reproduce the picture of me. It would create a new picture inspired by its training data. But the thing is, weirdly, that's just human. Everything we all produce is inspired both by our experience (aka external influence, which is akin to receiving a prompt), and inspiration taken from work we've seen of others. No professional photographer got to where they were having never seen a single other photo by anyone in their entire life. They'd learned and studied the work of photographers they liked and that influenced their own work. The exact same could be said for writers, directors, programmers, musicians, the list is kind of endless. Anyway, that's my two cents on the training data debacle. No it's not right to take other people's work without asking, but also, no it's not copying it, it's just taking inspiration from it like any human creator. Yes I understand the technical mechanics of it are different, but that's the crux of it.
You say it's a scam and that it's been 'only improving' for years and you specifically referred to LLMs. ChatGPT, arguably the first truly publicly accessible LLM, only came out in November 2022. So LLMs have only been around publicly for 18 months, so they have not been publicly around for years, let alone improving. Despite that, in that time they have improved greatly. I think it may be helpful to hone in on what exactly you're referring to here. You refer to AI with regards to art, whilst you say "Let this AI nonsense die". From this I'm presuming you're referring exclusively to generative AI because even if you're not a fan of what it can produce and do, AI in general has in those 18 months made monumental changes in various fields. From overtaking 3 decades of protein folding research in a matter of hours, to discovering new cancer treatments.
But even just keeping to the generative AIs such as LLMs and image generators, they have developed a lot and are becoming increasingly useful, especially LLMs. I have several friends who are very high end programmers with decades of experience and in charge of whole teams, who now use LLMs on a daily basis, helping with things like error finding, or even writing or theorising new code. The vocal capabilities which are also improving are even allowing LLMs to become a useful tool in things like speech and language therapy. Again, I know the models generated in this video aren't award winning, but considering how early generative 3D modelling is in the game, I think it's incredibly impressive, and honestly in a year I wouldn't be surprised if you could get production quality designs out of some of these tools. Saying AI is entirely a joke based on the first generation of 3D models is like saying any great artist was a joke and would never amount to anything based on their scribblings as a 1 year old. This is early days and the technology is developing at a very quick rate.
@@3DRevolution I disagree with the concept that it's inspired like a human. These models are programmed using others' work. They are a product that is made by using images, writing, video, voice, etc. without permission. Any other product would be immediately flagged for that.
Yes I am specifically talking about generative AI. I've been following a lot of professionals in various industries and a general consensus among most of them is that generative AI isn't helpful. Or at least isn't helpful enough to justify its use. And there are far too many cases of creative professionals being replaced or paid less because of AI. Personally any generative AI I've been given to work from has been a hindrance. I've heard the promises of what it can do if it gets better, but I've also seen the reality of what it's done. Suffice to say I'm unimpressed, and I don't think it's worth exploring more in its current direction
The AI generation is quite hit or miss. Absolutely not giving you an end-game product yet, but considering what it's doing, it's still pretty remarkable. Going from having nothing in the world like this, to being able to just write down a sentence of text and have a 3D model created for you, that is pretty game changing. Iterative steps, we'll get there eventually.
Are you showing the first response for each AI without trying multiple responses? The fact you didn't show multiple attempts for each using the same prompts makes me think this is a bit of a wank advertisement
I ran multiple passes on each, spending pretty much a day on it to see if anything improved the results, including follow up prompts etc. However, nothing seemed to improve the output in any way so I just used the first pass from each. I didn't show multiple attempts for each firstly because this video had a lot to cover, and because there were no interesting or impressive results I saw no benefit to drawing out the more visually boring section of the video. As for it being a "wank advertisement", I'm not really sure what you mean as I basically showed that this method didn't really work, and summarised by saying that all the LLMs, plus OpenSCAD, are great in their own right, but not for this purpose. I'm sorry you didn't like my video but I assure you the results of these tests and comparisons were not curated or fabricated.
someday researchers will be forced to tackle the much much harder and more interesting problem of producing usable meshes...
Have the AI replace the point cloud mesh with a parametric model. Done. As expensive as Dassault Systèmes and Autodesk products are, I feel like this should have already happened by now.
@@SeattleShelby what do you mean? I'm so confused
Use photogrammetry apps like Luma, you'll get much better results. But with a mobile phone, if you don't know how, the model will always be a bit unclear.
@@chrisblaser1799 I use RealityScan for mobile. Although I'm bad at scanning (I assume)
Sadly, genAI developers would rather try to develop a meh way of copying & replacing artists, than something actually useful to us.
Where's my auto UV unwrap?
Where's my tool for seamless textures?
Where's my tool for retopologising sculpts?
Cool stuff! Even just for getting rough references into one's preferred program instead of setting up canvases from photographs and whatnot.
I loved the entire video! How you presented each different tool had me sitting on the edge of my seat, both hoping for an elegant success or a total disaster. So well organized, presented and edited. Looking forward to your next video on this rapidly developing topic! -Courtney
Hey Courtney, thanks very much. Really glad you enjoyed it, always nice to hear from a fellow creator! Get in touch if you're ever interested in telling a Filament Story together, Happy Printing!
With LLMs you should have used follow-up prompts, for example "the elements are there but not in the correct position" or "the script generates an error because there is no cone object"
Yeah I actually spent a couple of days trying this with several LLMs and was absolutely trying feedback questions to iteratively improve the code. However, I found this had very little impact on the positioning and orientation of the models. So little in fact it wasn't even worth showing in the video when I did the final test between each of the LLMs.
I would definitely be interested in follow up videos with the newer models. This would be a serious space of innovation as LLMs are more properly trained to perform better on such tasks.
Follow up videos will definitely be on their way!
Meshy’s text to 3D generator didn’t work well for me so I tried using Copilot Designer’s text to image first (prompting for a 3d image) and then ran the image through Meshy’s image to 3D generator. That worked pretty well. Well enough for a prototype.
Update: Instant Mesh did a much better job than Meshy’s in converting Copilot Designer’s 3D image to a 3D file.
Thanks for the research!
Thanks!
Many thanks. How much did it cost you overall for the prototype?
Very well done overview of what's currently possible with AI & 3D printing! This is gold!
Thanks very much, glad you liked it! There's definitely some exciting stuff on the horizon!
@@3DRevolution Almost everything you said has been made obsolete over the last 2 days.
Someone figured out a new prompting technique to give LLMs spatial awareness.
And someone else figured out a way to use knowledge graphs to give LLMs much more intuition, and vastly more robust prompt interpretation.
@@jtjames79 based on this information you're sharing, what is the best tool for generating an STL using AI?
@@jtjames79 Hey, can you explain a little further what has changed or what's the best to use/watch for now?
@@jtjames79 can you give some info? I'm curious and not just me.
I like the LLM OpenSCAD model. Looks like it gave you all the shapes, you just need to move them around
My experience with chatgpt & openSCAD was very similar to yours. After several rounds of corrections, the LLM failed to grasp what I was after, and I ended up writing it from scratch. I tried to be as precise as possible in my descriptions of each geometric part for the LLM, but it didn't get the job done. So it's interesting but not useful. At best, it can rapidly create an almost-correct code that you have to tinker with manually.
Thank you for all your experimentation and explanations - what an exciting time to be involved with 3D Printing, scanning and AI assistance. Keep up the great work!
Thanks very much! Yes it's definitely an exciting time with the rate of development and innovation in this field!
Been trying this too. Apps that understand "real world" STL output like OpenSCAD, FreeCAD, Blender are going to be very interesting once AI gets integrated properly. I suspect the AI/LLMs are going to be more specialized and perhaps simpler than current LLMs that try to do everything. Makehuman understands the kinematics of body shapes, the AI only needs to modify the variables that are used by Makehuman to design the body and pose. The Makehuman plugin for Blender does the same. AI with OpenSCAD for robot design will be interesting, I use it for model trains, wheels, motors and gears, but robots also use motors and gears....
When it gets to the point you can prompt an AI to design a 3D printable fully functional mechanical design, that'll be truly game changing!
@@3DRevolution Breaking Taps did a vid on emulated biological joints for tiny robots. Add some AI procedural geodesic bones and humanoid bots are much stronger and lighter. Most bots today are over-engineered, subtractively machined, and ugly.
Amazing, what a time to be alive!
The exponential growth of AI is cool to watch. In 18 months it is going to be an awesome usable tool for 3D printing. Imagine describing the parameters for a bracket or repair part in plaintext and the AI spitting out a usable printable file.
And 12 months later we'll all have chips implanted in our brain meaning we need but think of something and it'll be printed for us, haha
I've used Luma for making models in a game but always wondered how well they would work for 3D printing, might have to try it out.
Definitely let us know how you find printing some of your Luma generated models. This field is always developing, and with the nature of AI, people often get wildly different experiences so great to get a broader range of how people find it.
Awesome dude! Thank you so much for this. Instant mesh is exactly the kind of thing I was looking for.
You're most welcome. This space is going to be evolving so much, looking forward to seeing what's next.
These programs are very interesting! I think I'll try Meshy or Luma services in the future.
What an Eye Opener!! I have basically designed everything I've printed using a 3D modeler... not a CAD system, simply because I know the modeling software well, and I find "fiddling with the design" far easier. For someone like myself, having AI design things I can't or find difficult would be a huge help. Things like screw threads of different dimensions. Hinges for a desktop box. Interlocking parts & buckles. While we're at it, can it help with tolerance evaluation... i.e., create a test print that measures screw thread print tolerances (both screw and screw hole) for my printer (a test setup). I like what you're doing. Subscribed.
This is all super easy to do after going through a few basic tutorials on any of the major CAD programs like Fusion 360 or Onshape
I love ideas like that. If it doesn't work, just try something different or something completely new just for fun
Thank You for a GREAT overview 👍😃. It saved me from doing hours of research.
Really glad it helped :)
In the LLM/OpenSCAD tests, you could have revisited the chats and asked them to fix the code by telling them what didn't turn out.
Interesting video! Small suggestion on your talking head shot. If you put the camera a little higher it will improve the shot. Putting the camera low can make it seem like you're "talking down" to the audience and vice versa. Level or slightly higher than eye level is ideal.
We shopped for our wedding photographer by… height
3D model generation and robot kinematics / robot decision making will be the next big things. They are inextricably linked.
I'm not sure if you're familiar with liquid neural networks, but there are groups working with them to allow AI to learn from experience, not just through training data, changing and adapting its own behaviour and thought processes. This has been applied to AI designing robots from scratch for specific tasks, right down to the materials and manufacturing methods.
The AI then runs thousands of simulations with its designed robot, adapting it to overcome failures, until it finds one that works, then it supplies the final design along with a new AI to run that robot!
interesting! Can’t wait to see how it improves!
Video was interesting, production good too. I learnt about some new tools I plan to play with. Thanks and keep it up!
Glad you liked it and hope it leads to some fun projects. As always, plenty more to come! Thanks, and happy printing!
intro monologue is top tier today lmao
Haha, I did get a bit carried away with this one.
Thanks for this review of AI 3d model generation, I imagine some designers that sell models are becoming worried. Does the phone fit the stand whilst charging? The space for the charging cable looks a bit small.
Glad you liked it!
I feel we are still a long way off being able to get exactly what you're picturing in your mind using AI. Whilst, as I show in this video, you can get it to produce something that loosely fits what you're after, if you had a very specific design in mind, I feel we're still a while off it matching that. Especially taking into account dimensional accuracy etc. And that's not to mention multi-part designs that fit together or have mechanical functionality.
But with all these things the rate the technology evolves is accelerating. Compare an image from DALL-E 2 to a MidJourney picture from 9 months later, or an AI video of Will Smith eating spaghetti to anything by Sora, and it's night and day. I'd imagine we'll see the same rate of development here and soon we'll start seeing some real breakthroughs.
As for the phone stand, that's a good question and not something I'd thought about. I have a few wireless charging stands dotted around so usually don't charge using a cable. However, I just went and checked and yes, it actually fits and works fine. Obviously that may vary a bit depending on your phone and the cable, but it worked fine for me. It was right on the edge though, if the rigid part of the cable had been any longer or thicker then I don't think it would have fitted, but yes, it certainly can work!
Were you interested in the phone stand? If people like the look of it and would actually like it I can possibly pop it on MakerWorld and Printables?
This is a great overview of the actual state of AI generated 3D modeling. Thank you for this video.
You're really welcome, thanks, and happy printing!
Copilot and ChatGPT are based on the same engine but trained differently... thus you can't expect good results if the model wasn't trained accordingly
Well done! Thank you for making this video. It was a huge time saver for me.
Really glad to hear it. Thanks for commenting!
You can take those models into ZBrush and clean up the inaccuracies and deficiencies. It would make it easier for designers to go in and do clean-up rather than starting from scratch.
Great video explanation, thanks for doing the investigation and sharing your results👍
- "Interesting". Thx for sharing.
- A long way to go, still - but it's a start...
You're very welcome. Yes definitely, I'd say this is roughly the 3D equivalent of DALL-E 2, so when we start getting the 3D equivalent of MidJourney in 9-12 months, it'll be pretty game changing!
Microsoft Copilot uses ChatGPT-based models as its backend, so it's not too surprising that the results are similar. I wonder how Google's Gemini would do with this.
Definitely, but there are some tweaks with its results so I thought it would be interesting to see.
I did also test Gemini at the same time, I initially intended to use that instead of CoPilot, but everything it spat out was so broken that OpenSCAD refused to even run the code. It wasn't just like some of the ones in the video where it ran but showed errors, it plain didn't run. So I ended up deciding it wouldn't be contributing anything to the video and didn't include it.
I'm sure it'll improve past that point, but so far I've not been blown away by Gemini in general unfortunately.
OMG, a channel that I needed so badly to learn everything, OMG, unreal
Very interesting. Not that great yet, but amazing that it works at all.
Excellent work. Much appreciated.
great video as usual thanks
Can you get an articulated model from any of the AI programs? Like an articulated dog for 3D printing?
A little news on this coming very soon ;)
InstantMesh seems to have the most potential, I wish it was a bit better at using drawings instead of photos for generating model bases
In time I'm sure we'll see this sort of thing improved
I’d been thinking about trying OpenSCAD, so this was disappointing :-/
I wonder how it would do if you gave it a complex model already created and just asked it to set the appropriate variables? (There's a crazy-robust box generator I've been thinking about using to build a power supply box with a bunch of switches, jack outlets, studs for mounting the PS itself, etc. It has so many variables, I'm wondering if ChatGPT could make it simpler for me to just tell it what I want and then get all the right variables set.)
Don't let this video dissuade you from trying OpenSCAD. OpenSCAD itself is fantastic for producing parametric models. This just shows that currently AIs aren't great at writing code for OpenSCAD, so you'd still need to write it yourself.
Which box generator have you been looking at? There's some great ones out there.
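And for anyone reading this who hasn't seen OpenSCAD before, here's a minimal parametric sketch of my own (just an illustration, not from any particular generator) showing why it's so nice, change the numbers at the top and the whole box rebuilds:
    length = 80;  // all dimensions in mm
    width  = 50;
    height = 30;
    wall   = 2;
    difference() {
        cube([length, width, height]);                        // outer shell
        translate([wall, wall, wall])                         // carve out the inside,
            cube([length - 2*wall, width - 2*wall, height]);  // leaving the top open
    }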
@@3DRevolution I forget which one it was offhand, as I’ve been sidetracked by work stuff the last couple of months. It was just insanely complete though, with many, many options for just about anything you could possibly think of to do to a box 😂
I know OpenSCAD isn't inherently very difficult, it's just that my bandwidth for learning new tools related to 3D printing these days is almost non-existent :-/
Thank you for sharing, quite impressive!
You're most welcome!
Very interesting... I would be very much interested in a solution to use in CAD design for building REAL NASA Grade working robots, including STEP files of electronic parts that I can import into KiCad...
I've found that with ChatGPT you can usually tell it what it's done wrong, and it will correct the issue. Rarely is one single prompt enough for a working solution.
For a new shape like a chair, you will want to start a new conversation, so that the context doesn't confuse it.
Also Copilot is the same backend as ChatGPT, it's just a little bit worse. So I wouldn't expect much from there.
Oh absolutely, you often need to go back and feedback. I actually spent around 2 days trying to make it work by doing that on each of these, but the issues were never corrected. Because the results after multiple rounds of back and forth were pretty much identical to the results from the first prompt, I didn't think it was worth including them in the video, but in hindsight, perhaps I should have mentioned that I had given them that opportunity.
As for the new shape, I do get what you mean, and I know that with the example I included of ChatGPT I did request the chair code in the same conversation as the rocket, but I didn't with the other two LLMs, and as I'd mentioned before, through my extensive tests with these, regardless of new conversations or not, it didn't really improve things.
You are correct, Microsoft CoPilot does run a ChatGPT model, but as you say, it's Microsoft's own flavour of it. I was originally going to use Gemini as the third candidate here but in my preliminary tests (of which there were many), it never once gave me code that OpenSCAD would run even with errors, it just refused to run it altogether.
@@3DRevolution it's been a while since I saw the episode but did you use GPT 4? Or 3.5? -- I've noticed that 4o and 3.5 both kind of suck. The regular GPT 4 (the one that still costs money) is like 10x better than the others. -- I've had flat out wrong and bad answers from 4o and 3.5, and when I ask the same question to GPT 4, it usually gets it right.
I like your insights! Have you tried Rodin by this company called Deemos? I've checked their trial, and it looks like they do a better job than Meshy, in terms of style and quality. Would love to see a review video!
Glad you like my videos, thanks for the comment.
Regarding Rodin, watch this space 😉
You should have chat GPT write a prompt for Tripo to see if it can get more detail based on the more detailed prompt that GPT could give
A nice idea actually, perhaps something I'll try in a future video. Good suggestion.
I spent years making custom models in DAZ Studio only to find out they are not watertight and riddled with issues when attempting to 3D print. Would love a tool that uses AI to repair those issues and make the models 3D printable. Spent many painstaking hours making these models and although it's been several years of me coming back and attempting to make them 3D printable, I am still left with nothing to show for it.
I love the idea of AI 3D model repair.
Pop in a model and it makes the necessary changes to make it suitable for 3D printing! I'm sure this is something we'll see over the coming years!
There's a few of the models, I think Gen 3 and below, that are watertight
Export the posed model as an OBJ or STL and import it into Blender. Then use the Remesh modifier on the model. This will approximate the shape and generate a clean mesh. You can search on Google for a tutorial. AI is not needed.
couldn't you smooth it out with Cubify or NURBS??
Which one are you referring to? With most of these the issue is inaccuracies or lack of detail so smoothing out wouldn't necessarily help.
Very useful and informative video. I'm definitely going to try the video to model tool
*Update- I tried lumalabs but it appears to have changed since this video was made and I couldn't find the video to model functionality*
I was about to come and correct him on how he should have been using the LLMs in round one. But then I realized he's using them just like a normal user would, not somebody who's more familiar with AI. So, while the LLMs he used could have done better if I were prompting them, the average non-AI user will have the same results as him. So, nice.
Also, I was not aware of the other AI tools.. I already wanted a 3d printer.. noooo now I neeeed one. heh
Exactly. The whole point of this video was looking to see how accessible and useful these tools were to the average person. That's why I was only looking at tools that are available to use for free, and didn't get into the nuances of how to be more proficient at prompt engineering etc.
I could have written better prompts, recorded better videos for the NeRFs, and could have chosen more suitable 2D images, but the appeal of the concept of these sorts of tools is simplicity and ease of use, so I wanted to see how they'd deal with the sorts of interaction they would likely receive from someone who didn't want to, or didn't have the time to, become more experienced and/or specialised in their use.
Can you go over tools to modify existing models? Like if I have a model that was created from an existing part but has surface defects carried over from damage on the original part. Thanks
Hey Jimmy, just so I understand you, do you mean how to modify models you've generated by scanning (like I did with the horse's head here)? So you could scan something that's damaged, repair it on the computer, then print a new one?
LOL knew you'd be all over AI
Haha, just as AI is all over literally everything these days.
is there any ai tools that can alter a 3d object?
Watch this space 😉
How does one start investing in these AI models?
Well you start by putting a gas mask on a horse...
What slicer are you using?
In this video I'm using Bambu Studio by Bambu Labs.
Claude is terrifying!
More terrifying than a horse in a gas mask?
@@3DRevolution yes!
Within a couple of years, this will work perfectly
This tech of 3D scan to 3D mesh has been around in iPhone apps for 10+ years; it's not improved much.
3D generation seems to be improving at a very slow pace. I would not bet on "working perfectly" in a few years. Usable? Maybe.
@@larin4587 Agreed. There are many other factors YouTubers leave out of these low-effort entertainment videos made for clicks; they couldn't be bothered to put actual work in. The major hurdle now is processing power. Godspeed humanity... or um... computers
Good video, just two things: a seed is not "any big number", but a number used to deterministically generate a pseudo-random output (using a PRNG). Also, true randomness in software is actually possible if a TRNG (true random number generator) is being used.
Thanks. Ah, well, that's kind of what I meant. A seed is basically any big number, but what it's used for is to generate a pseudo-random output, as you mention.
I'm intrigued by the TRNG though. I'm not aware of any way to generate a truly random number. Yes, you can write something that generates a number based on the millisecond and day of the month, or by taking an analogue reading of the radio interference on a wire, but technically that is still based on a manipulable figure.
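To illustrate the distinction being discussed, here's a minimal Python sketch (the specific numbers are just examples):

import random
import secrets

# A seeded PRNG is deterministic: the same seed always produces
# the same sequence, which is what makes generations reproducible.
random.seed(42)
print([random.randint(0, 99) for _ in range(3)])

random.seed(42)
print([random.randint(0, 99) for _ in range(3)])  # identical to the above

# The secrets module draws from the OS entropy pool, which on many
# systems mixes in hardware noise, so there is no seed to replay.
print(secrets.randbits(32))  # different on every run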
Mamba will perform better because it is a state-space model.
Remarkable vid, this is wild stuff.
I tried this last December with GPT-3 and it always failed. I tried to create a ring with a pair of lips on it 💍💋. It never got it right after many tries. The ring was often oriented wrong. The lips never looked like lips, only a cylinder or some freakish shape.
17:58 I've attempted a NeRF app, I forget which, but man does it always come out crap.
I would like an AI that cleans messy meshes to perfection... retopology.
Instead we get AIs that generate monumental garbage meshes and call it a day :(
As I say in the video, look at generative AI. In 9 months we went from nightmarish videos of an alien-like morphing Will Smith with a dislocated jaw, to near picture- (and physics-) perfect puppies playing in snow.
This is the first generation of AI mesh generation, give it time.
@@3DRevolution Sure, I'll give it time.
But the art of logical and neat geometry in 3D isn't obvious, and the available examples to learn from aren't as ubiquitous as video/images/text.
I'll stay skeptical for now.
None of these really worked for me. I had designed some anime characters in 2D art and none of the tools here produced anything usable. Guess I have to go learn 3D modeling now.
Cool thoughts on scad
I tried Meshy but the AI never understands my requests. It couldn't make my mountain, or any of my characters or creatures. I think I need to rely on my own abilities or commission other artists.
That's not a bad thing. As interesting and as fun as it was to play around with these tools and make this video, there's absolutely nothing like the satisfaction of trying out something that you've made yourself for the first time. Whether it's flying a drone, watching a film, running some code, or hanging a painting.
AI tools can be impressive and useful, but human driven creation is both more rewarding, and more enjoyable.
I am new here... how did you 3D print the colors? Did you change the PLA filament?
Fancy printer that can do more than 1 colour
This was printed on a Bambu Lab X1C. There are many printers that can print multiple colours (and materials) in a single print. Some use different methods.
The X1C has one extruder and once it's finished printing a colour for a layer it unloads the filament and loads the next filament, then prints that section of that layer and so on.
Other printers have multiple extruders, each with a filament loaded into them, and there are other methods besides.
Each method has its own advantages and disadvantages.
It is possible to print more simplistic multi-colour prints without a multi-material setup by doing as you suggested: changing the filament mid-print.
It's even possible in your slicer to set the points where you want to do this, so the printer will automatically pause the print and wait for you to replace the filament before it continues. But this method only allows you to have different colours on different layers of the print, not multiple colours on a single layer.
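If you'd rather script those pause points yourself, here's a minimal sketch of a G-code post-processor in Python. It assumes Marlin-style firmware that supports the M600 filament-change command and a slicer that writes Cura-style ";LAYER:N" comments (other slicers use different layer markers); the filenames and layer numbers are placeholders:

# Insert a filament-change pause at the chosen layers
PAUSE_LAYERS = {25, 60}  # placeholder layer numbers

with open("print.gcode") as src, open("print_with_pauses.gcode", "w") as dst:
    for line in src:
        dst.write(line)
        # Cura-style layer markers look like ";LAYER:25"
        if line.startswith(";LAYER:"):
            layer = int(line.split(":")[1])
            if layer in PAUSE_LAYERS:
                dst.write("M600 ; pause for manual filament change\n")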
@@3DRevolution cool thanks.
My job as a design engineer will surely be replaced by AI in 10-15 years. I'll have to figure something else out.
Don't worry, there'll be another 8 billion of us right along there with you.
@@3DRevolution how reassuring 😂
@@mrteemug5329 I'm fully aware that AI has the power to solve some major problems in the world, or break everything we've built. But also that there is no stopping it now; the genie is out of the bottle and one way or another people will develop and improve it, so hell, enjoy using the fun AI tools while we can 😅
@@3DRevolution Yeah, Pandora's box has already been opened, no stopping it at this point. I can relish my memories of living in a time with no smart devices and a much simpler life. So long as the future is no Skynet I'll be alright lol :D
@@mrteemug5329 Yeah, the explosion of technology in my lifetime is quite amazing. I grew up with a wooden CRT TV that you had to manually tune into the frequency of a channel, with movies saved to spools of magnetic tape in a plastic box, and when a phone meant a thing connected by a coiled cord to the wall.
And now I can have a long philosophical spoken conversation with an artificial intelligence whilst I monitor my 3D printer via a video feed on my watch.
“Let’s import it” into what?
By that I was referring to a slicer, which is software used to prepare a 3D model for 3D printing. Effectively, you feed in a 3D model, the specifications of the machine you'll be printing with, and your desired print settings, and the slicer creates a file which is just a list of machine commands for the printer to follow.
In this video I used both PrusaSlicer and Bambu Studio (two of the most popular slicers).
I have found that GPT-3.5 is utter shit when creating new things.
You really need to go for GPT-4 (Turbo or the new 4o version; the latter is cheapest).
Definitely, but at the time of recording this video GPT-4o wasn't yet a thing, and as I was intentionally only covering tools available for free, 3.5 was the only one to trial here.
nice
3D Ai Studio is THE BEST for 3D Printing, have you tried that?
I absolutely have. There were several other tools I tried out in prep for this video, but I found the results from 3D AI Studio tended to be similar to ones I was covering already, so I didn't feel it was adding anything new to the video. But yeah, it's definitely worth a play with, I enjoyed it.
@@3DRevolution Agreed, similar to Tripo etc. But they have real-time texture generation, texture upscaling, etc.
I like that about it.
@@JanHMR That's true, but as texture isn't currently too useful for 3D printing that wasn't something I was factoring in. But it is definitely a cool feature.
@@3DRevolution True that!
Excited to see how the space evolves. I think in the future we will be able to generate 3D models of whatever we can imagine in production quality. Exciting times.
Request:
a tutorial on adding OpenSCAD to Ollama
a tutorial on using topology optimization
Using GPT-3.5 is itself a lost cause. You have to use GPT-4 or 4o.
At the time of recording this video, 4o hadn't yet been announced, and as I mention, in this video I'm explicitly looking at tools and services that are available to use for free, so 3.5 was the only version of ChatGPT to choose from at the time.
That said I do have GPT4 and had already tried this with that at the time, and have tested it with 4o since with near identical results.
The only difference is, once I got 4o to literally give me an actual functional STL to download. However, it just contained an egg shape, and through the 20+ further attempts, after telling me it was giving me an STL it would then come back and say it doesn't yet have the capability to make STL files, even after I reminded it that it already had.
And I found the results of using 4o to write code for OpenSCAD identical to those in this video.
Unrelated to 3D printing, but: I used to pay people all the time to get custom Python scripts made, usually for automating file sorting/transfers/conversion etc.
Ever since the AIs came out, I have not had to pay a single cent to anyone anymore. The AI can do it all.
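For a sense of the kind of script being described, here's a minimal Python sketch of an extension-based file sorter (the folder path is a placeholder):

from pathlib import Path
import shutil

inbox = Path.home() / "Downloads"  # placeholder folder to organise

for item in inbox.iterdir():
    if item.is_file():
        # Group files into subfolders named after their extension,
        # e.g. pdf/, jpg/, stl/
        ext = item.suffix.lstrip(".").lower() or "no_extension"
        dest = inbox / ext
        dest.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest / item.name))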
ChatGPT is really bad at coding and fixing errors. Copilot usually gets it on the first or second try. The only issue is that there's a 5-message limit, and a character limit, so if you don't get it working in 5 attempts, you basically have to start over.
I think a better way for you to test the ChatGPT and Copilot OpenSCAD generation would be to tell them the errors and have them fix them, rather than you correcting them yourself.
Usually when I get errors, I just copy-paste the console output into the AI, and it automatically picks up on the error and suggests various ways to correct it, or I just give it the "no yapping" prompt and ask for a solution.
I absolutely appreciate that, and I actually spent an entire day doing exactly that with each of them before producing this video, but no matter how much back and forth I did, they never improved. It gained so little that I just didn't think it was worth including in the video.
Thanks for the suggestion though, I'll definitely be doing more and different tests and comparisons in the future.
Where horse STL?? 🐴
intro like in tv
It’s devastating and I hate it…
Modeling things in 3D is literally the core aspect of being a 3D artist, and now everyone without skill and dedication can create useless things like this…
comment to let the algorithm show me more videos like this :)
The meshes are horrific.
Have you seen my more recent video on 3D gen AI posted a couple of weeks ago?
I'm a fan of how you put this content together, but I can't fathom why you think running against 3.5 would result in content that provides any benefit for anyone.
I am sure you are aware of the utter night-and-day difference in reasoning between the two. Hopefully you are also aware that even GPT-4o does barely any better at reasoning within a 3D parametric space. That video would have at least been useful and insightful.
This video was created and published before GPT-4o was even talked about publicly.
Good one, GPT4-Turbo then. Thanks for wasting my time.
16:23 *_The_* human language 🤔😅
Human language as opposed to computer language, it's a common term. Similar to 'natural language' or 'natural spoken language'.
It's also popular for people who enjoy human music, like those classics, 'boop', 'beep', and 'ding'. :')
@@3DRevolution You make an interesting point. There's a phenomenon that if people know that someone made something with AI they will appreciate it less (often far less). This is then associated with it being "AI-made" and for some reason people forget it was a human using the AI to make it.
So, I wonder if it does boil down to this evolutionarily early concept of "us" & "not us." Like, people don't usually use disgust emotions to describe something of decently high aesthetic quality. But disgust does usually come from a place of deep evolutionary biology.
I also wonder how disgusted people were at the industrial revolution.
Edit: And, what does it mean about those of us who don't devalue stuff for having been made with AI? 🤔
@@RubelliteFae are you familiar with the term Luddite?
Commonly used these days to refer to someone who dislikes or is afraid of technology or technological change/development.
It actually originates from a community of textile workers in Nottingham, England in the 19th century, who protested against the industrial textile machinery that they felt threatened their trade. They conducted raids and sabotage on machines and revered a mythical man by the name of Ned Ludd from the 1700s, the namesake of the Luddites.
I feel with every technological or societal change we experience a modern version of this, and with AI it threatens to be both a technological AND a societal change.
@@3DRevolution Yeah, I knew about the Luddites and the term neoluddite. I suppose I had presumed that happened due to panic & fear, not revulsion. Sure, the former has been happening with AI, but the latter is what has taken me by surprise.
It's strange. We've known this revolution is coming my whole 41 years of living (with some even predicting 2020s as the timeframe), but so few seem to be prepared for it.
@@RubelliteFae it's difficult for the masses to truly believe change is coming until they see it, and if it's a drastic change, any tale that is foretold of it just seems fantastical.
But we need to see AI junk
I do apologise, did I not include enough AI junk here? I can always deliver more!
Try this as a prompt, Create a stand shaped like a gripping hand which is holding an invisible iPhone 11.
Nope, ChatGPT doesn't have an understanding of what a rocket looks like.
It's a language model, not a general intelligence.
It's picked up a pattern in language use.
It's spotted a pattern of setting variables in scripts, so it's copied that.
It doesn't "understand" what a variable is, or how it functions.
Computer code is a language, so it can produce something that convincingly follows the rules. That doesn't mean it actually works!
Much like if you get it to write an essay. On the surface it looks impressive, but if you actually know the subject, it's kinda meaningless.
You'd be better off searching Stack Exchange, which is pretty much what the LLMs did when learning what code looks like.
Generative AI is a useful tool, but it has to be used very carefully. It's not an information resource, as much as Google & MS are trying to push it into one. It cannot fact-check. Using generative AI incorrectly can be dangerous.
Uh, despite my rambling, good video, & an interesting experiment! :)
Haha I appreciate your rambling and your comment. I completely agree that it's not what LLMs are designed for or intended to do, but I thought it would be interesting to see how they'd perform.
that's the ugliest phone stand I have ever seen.
Those results are really bad for sites you need to pay for to get any real use out of them...
All the sites and services I covered in this video are available to use for free.
If people were to pay to use the services more, they'd only be doing so after having already used it for free and therefore would be confident that the quality of the output was suitable for them.
Also remember generative AI improves with use and further training. Some people may even subscribe to these services just to fund their continued existence, as an investment in what it will become.
Look at how text-to-image and text-to-video generative AI improved in just 9 months. Paraphrasing Sam Altman from earlier this year: 'GPT-4 is amazing and is constantly awing people, but GPT-4 is the worst it will ever be.' The same could be said for these too. They will only get better.
garbage.
One man's trash...
@@3DRevolution is still trash.
Keep your spaghetti out of Will Smith's mouth!
I don't mean to show up just to drag a video, but this popped up on my feed so I'm gonna throw this thought out there. Let this AI nonsense die; it's not good and I really doubt it ever will be. The majority of generative LLMs are just a scam, which is especially evident in the 3D generative tools you showcased. The fact that they're charging to create a mediocre and mostly unusable mesh with no consistency in output should be pretty telling. Add on the problematic creation of these generative products using massive amounts of unlicensed, uncredited, uncompensated work, and you risk promoting exploitation. The idea of typing an idea and getting a 3D model is amazing, but the reality of what you get is far from that. Judging from the history of LLM companies, it will never reach that idea either.
I totally get that some people aren't a fan of the changes that various AI is bringing, and I straddle the fence on it myself.
However, I'd be very interested to know what you mean by most LLMs being a scam. I use LLMs almost daily and they do exactly what they say on the tin.
As for the quality of these models, I do agree, the quality is not there... yet.
As I mention in the video, every tool I covered here is available to use for free (to a limit) right now. Of course they will charge for excessive usage as running an AI server farm costs a huge amount of money. But also they need to get as many people using it as possible because that is how it will improve.
They give people a chance to use their tools for free so people are only going to pay to use their service more if they've already tried it (for free) and are happy that the quality meets their requirements.
As I mention in the video, look at those horrifying and laughable AI-generated videos of Will Smith eating spaghetti. That was completely unusable, pretty much a joke, and it was the best we could expect at the time. 9 months later and Sora was announced, demoing almost photo-realistic minute-long video clips.
It took like 3 days for the world to realise the Pope hadn't suddenly become fashionable after MidJourney hit the internet, and that was almost a year ago, with plenty of improvement since then.
This is early days for this technology and it's evolving quickly.
Yes, there are questions around the training data, but that's a whole separate conversation.
@@3DRevolution The training data is generally the forefront of the problem in my eyes, but moving past that to the note of calling it a scam.
I call it a scam because the technology has been "getting better" or "only improving" for years. The best I've seen is simply a different option in a production pipeline, often worse and rarely better. Especially when a company is selling an amount of generations, it's more akin to a mobile game with random lootboxes, or slot machines, than an actual art tool.
I think it's also important to note that these tools are never developed as tools for artists. They're not meant to support but to replace, and it's been proven time and again that they can't do that. In the realm of 3D models, generative tech skips to the end product, and it's virtually useless; there is nothing of a functional 3D pipeline, and I've seen no intent to take that direction. The only successful application I've seen for any generative AI tech is creating internet spam (or worse).
@@gabrielspangler6964 The training data is a difficult one and I am divided on that. On the one hand I don't think they should be just taking other people's protected work without permission, and as a creative myself I empathise with that side of things. However, equally I feel that on the whole, the various forms of generative AI aren't copying. If I knew that a photo of me had been used to train MidJourney, I could write a detailed prompt describing that photo exactly and run it through MidJourney 1 million times. It would never just reproduce the picture of me. It would create a new picture inspired by its training data.
But the thing is, weirdly, that's just human. Everything we all produce is inspired both by our experience (aka external influence, which is akin to receiving a prompt), and inspiration taken from work we've seen of others.
No professional photographer got to where they were having never seen a single other photo by anyone in their entire life. They'd learned and studied the work of photographers they liked and that influenced their own work. The exact same could be said for writers, directors, programmers, musicians, the list is kind of endless.
Anyway, that's my two cents on the training data debacle. No, it's not right to take other people's work without asking, but also, no, it's not copying it; it's taking inspiration from it like any human creator. Yes, I understand the technical mechanics of it are different, but that's the crux of it.
You say it's a scam and that it's been 'only improving' for years, and you specifically referred to LLMs. ChatGPT, arguably the first truly publicly accessible LLM, only came out in November 2022. So LLMs have only been around publicly for 18 months; they have not been around for years, let alone improving for years. Despite that, in that time they have improved greatly.
I think it may be helpful to hone in on what exactly you're referring to here. You refer to AI with regards to art, whilst you say "Let this AI nonsense die". From this I'm presuming you're referring exclusively to generative AI, because even if you're not a fan of what it can produce and do, AI in general has in those 18 months made monumental changes in various fields, from overtaking 3 decades of protein folding research in a matter of hours, to discovering new cancer treatments.
But even just keeping to the generative AI's such as LLMs and image generators, they have developed a lot and are becoming increasingly useful, especially LLMs. I have several friends who are very high end programmers with decades of experience and in charge of whole teams, who now use LLMs on a daily basis, helping with things like error finding, or even writing or theorising new code. The vocal capabilities which are also improving are even allowing LLMs to become a useful tool in things like speech and language therapy.
Again, I know the models generated in this video aren't award-winning, but considering how early in the game generative 3D modelling is, I think it's incredibly impressive, and honestly, in a year I wouldn't be surprised if you could get production-quality designs out of some of these tools. Saying AI is entirely a joke based on the first generation of 3D models is like saying any great artist was a joke and would never amount to anything based on their scribblings as a 1-year-old. These are early days and the technology is developing at a very quick rate.
@@3DRevolution I disagree with the concept that it's inspired like a human. These models are programmed using others' work. They are a product made by using images, writing, video, voice, etc. without permission. Any other product would be immediately flagged for that.
Yes, I am specifically talking about generative AI. I've been following a lot of professionals in various industries, and the general consensus among most of them is that generative AI isn't helpful, or at least isn't helpful enough to justify its use. And there are far too many cases of creative professionals being replaced or paid less because of AI. Personally, any generative AI output I've been given to work from has been a hindrance.
I've heard the promises of what it can do if it gets better, but I've also seen the reality of what it's done. Suffice to say I'm unimpressed, and I don't think it's worth exploring further in its current direction.
Mom's spaghetti?
Knees weak, arms are heavy
Disappointing. I will wait till it makes more sense.
The AI generation is quite hit or miss. Absolutely not giving you an end-game product yet, but considering what it's doing, it's still pretty remarkable.
Going from having nothing in the world like this, to being able to just write down a sentence of text and have a 3D model created for you, that is pretty game changing. Iterative steps, we'll get there eventually.
AI art is trash
Are you showing the first response for each AI, without trying multiple responses? The fact you didn't show multiple attempts for each using the same prompts makes me think this is a bit of a wank advertisement.
I ran multiple passes on each, spending pretty much a day on it to see if anything improved the results, including follow-up prompts etc.
However, nothing seemed to improve the output in any way, so I just used the first pass from each.
I didn't show multiple attempts for each, firstly because this video had a lot to cover, and secondly because there were no interesting or impressive results, so I saw no benefit in drawing out the most visually boring section of the video.
As for it being a "wank advertisement", I'm not really sure what you mean, as I basically showed that this method didn't really work, and summarised by saying that all the LLMs, plus OpenSCAD, are great in their own right, but not for this purpose.
I'm sorry you didn't like my video, but I assure you the results of these tests and comparisons were not curated or fabricated.
@@3DRevolution I appreciate the clarification there. I realise I sounded like a bit of a prick in my initial comment. Sorry about that
How about you just do the work and not use AI? That's what real artists do.
"ai" can't design anything.
AI for a phone stand??? C'mon!!!!! It's like using a hammer to open a peanut! Use tools the right way plzzzzzz
Sorry I'm a little unsure what you mean.
Are you saying you think a phone stand is too much of a precision item for something as brute-force as AI?