AltheSplinebender
Joined Dec 12, 2012
Mark Foggo and his Missionaries of Ska - WEIRDOS - Live at Burapa Bike Festival 2024
Find Mark Foggo and his Missionaries of Ska on Facebook, for concert dates and booking in and around Thailand:
profile.php?id=61556485589643
375 views
Videos
Comments on the Functional Analysis of a Vehicle Brake
32 views · 11 months ago
Fly-Wheel Driven Light Railway Vehicle (MDP/RAMS project discussion)
78 views · 11 months ago
Some comments on the 2023 project assignment.
Is ChatGPT - Open AI making stuff up like a sneaky student?
585 views · a year ago
Sharing my experience with ChatGPT when it really looks like it is making up publications for a research paper! What is going on here!?
Bearing Assembly on Railway Vibration Demonstrator, with Pro from SCHAEFFLER Thailand
363 views · 2 years ago
I have wanted to make such a video for many years, and the opportunity finally came along in the form of the Railway Vehicle Vibration Demonstrator (RVVD) that I, together with my dear colleague Dr. Khemapat Tontiwattanakul, designed and built in 2022, because KMUTNB and a special funding line allowed us to! In this video, Mr. Anuch Leelasettakul, field service engineer of SCHAEFFLER Thailand, go...
Thai-German Life Stories and Technical Education (documentary)
2.1K views · 3 years ago
In the years that I have been contributing to TGGS, I have met many Thai people whose lives were influenced by the long tradition of Thai-German collaboration in education - especially in technical education - and everybody seemed fulfilled, happy and proud to share their experiences. This documentary is primarily trying to demonstrate the value that international education has for a fulfilled ...
National Science and Technology Fair 2020
92 views · 3 years ago
TGGS PR edited this clip to document TGGS' contribution to the German Pavilion on the NSTF 2020 in Bangkok. tggs.kmutnb.ac.th/
Assembly-Modelling Strategies with CREO (TGGS/MAE: CAET1 Lesson 3)
393 views · 4 years ago
This is a recording of the third class of "Computer-Aided Engineering Tools I" for the MAE programs at TGGS, recorded August 20, 2020. The format (duration etc.) is not really well-suited for YouTube, and normally I only share the lecture recordings with the students as unlisted videos. However, this one, I believe, might be interesting for a larger audience. I discuss and demonstrate some techn...
Bali-Bangkok-Aachen-Göteborg: DAAD and TGGS can kickstart your international studies in Engineering
260 views · 4 years ago
Mr. Putra from Bali is now doing his PhD at Chalmers University in Sweden. In this talk with Alex from TGGS, he explains how his studies at The International Sirindhorn Thai-German Graduate School of Engineering, supported by a scholarship from DAAD and RWTH Aachen University, got him there. Scholarship Application 2020: tggs.kmutnb.ac.th/daad-scholarship-extended-deadline-31st-may-2020
Topology Optimization with ANSYS
3.2K views · 4 years ago
A recording of a lecture by Dr.-Ing. Alex Brezing and Mr. Mahathep Sukpat within the "Computer-Aided Engineering Tools 2" class in the M. Eng. program "Mechanical Engineering, Simulation & Design" at TGGS/KMUTNB Bangkok. Recorded April 23, 2020. Contents: - Brief introduction on TO as a design-tool - Application Example of TO on a vehicle frame - Instructions on TO of a simple beam with ANSYS M...
Product Language - an introduction to the theory of product design
577 views · 4 years ago
A recording of a lecture by Dr.-Ing. Alex Brezing within the "Industrial Design Engineering" class in the M. Eng. program "Mechanical Engineering, Simulation & Design" at TGGS/KMUTNB Bangkok. Recorded April 14, 2020. Contents: - What motivates people to buy stuff? - The expertise of Product Design (Industrial Design) - Things have meaning - Semiotics as an underlying science of Product Design - ...
Structural Simulation Strategies on CAD Assemblies (with demonstrations on ANSYS)
948 views · 4 years ago
Recorded Online Lecture (29th of April 2020) by Dr. Alex Brezing and instructions on ANSYS by Mr. Chinnawit Glunrawd from the CAET2 course in the MESD M.Eng. program at TGGS. Contents: - the motivation behind structural simulations of CAD assemblies - outlining different levels of simplifying assembly models for structural simulations with FEM - demonstration of use of ANSYS FEM software on two...
Modal Analysis with ANSYS
511 views · 4 years ago
Recorded Online Lecture (2nd of April 2020) by Dr. Alex Brezing and instructions on ANSYS by Mr. Chinnawit Glunrawd from the CAET2 course in the MESD M.Eng. program at TGGS. Contents: - explanation of fundamental terms - demonstration of "Modes" on an acoustic guitar - demonstration of use of ANSYS FEM software for a modal analysis (general beam and pitchfork) MESD program mesd.tggs.kmutnb.ac.t...
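The core computation behind a modal analysis like the one demonstrated in this lecture is the generalized eigenvalue problem K·φ = ω²·M·φ. Below is a minimal sketch in Python with an invented two-mass example (illustrative values only; this is not the ANSYS workflow from the video, and the simplification M^(-1/2)·K·M^(-1/2) assumes a diagonal mass matrix):

```python
# Toy modal analysis: a 2-DOF lumped mass-spring chain, analogous to the
# eigenproblem K*phi = w^2 * M*phi that FEM packages solve internally.
import numpy as np

def modal_analysis(K, M):
    """Return natural frequencies (Hz) and mode shapes for diagonal M."""
    # numpy's eigh does not accept a mass matrix, so transform the
    # generalized problem to a standard one via M^(-1/2) (diagonal M only).
    Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
    A = Minv_sqrt @ K @ Minv_sqrt
    lam, vecs = np.linalg.eigh(A)          # eigenvalues ascending
    freqs_hz = np.sqrt(lam) / (2 * np.pi)  # w = sqrt(lambda), f = w/(2*pi)
    modes = Minv_sqrt @ vecs               # back to physical coordinates
    return freqs_hz, modes

# Two equal masses, three equal springs, both ends fixed:
# the exact eigenvalues are k/m and 3k/m.
k, m = 1000.0, 1.0
K = np.array([[2 * k, -k], [-k, 2 * k]])
M = np.diag([m, m])
f, phi = modal_analysis(K, M)
```

The first mode (both masses in phase) is always the lower frequency; a pitchfork or guitar body works the same way, just with many more degrees of freedom.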
Reverse Engineering - CREO-Modelling with Scanned Geometry
6K views · 4 years ago
Recorded Online Lecture and CAD instructions by Dr. Alex Brezing from the CAET2 course in the MESD M.Eng. program at TGGS. Contents: - thoughts on Reverse Engineering - 3D Geometry Scanning - Reverse-Modelling based on Scan-Data with CREO 3.0 The scanning and modelling is demonstrated on a soprano saxophone mouthpiece. MESD program mesd.tggs.kmutnb.ac.th/ TGGS tggs.kmutnb.ac.th/
Freeform Surface Modelling with CREO (CAET2 class @ TGGS/MESD)
3.9K views · 4 years ago
This is from a lecture by Dr.-Ing. Alex Brezing that moved online because of COVID-19, recorded 20/03/2020 for the MESD Master of Engineering program at The International Sirindhorn Thai-German Graduate School of Engineering (TGGS) at KMUTNB in Bangkok, Thailand. This lecture covers the principles of freeform surface modelling, with demonstrations on CREO 3.0. Specifically: - General approach to ...
Yes it is. Try asking ChatGPT to write a biography of you using known facts. It'll be 95% fiction. It won't say "I don't know" or limit itself to known facts. If you ask for 10,000 words, it'll fill them with fiction.
My friend! Making me happy for over 40 years😎😆😍
Really good moment🎉
❤
Yep ~ It's a known problem called "hallucination". See the following article from March 13, 2023: "Hallucinations Could Blunt ChatGPT’s Success"
I am realizing that by putting out my semi-informed experience I have invited knowledgeable people to educate me! Pretty cool experience!
@@AltheSplinebender This is all so new ~ and we're all learning. I just now discovered this myself ~ and what I like about your video is your detailed and very thorough account of what "hallucination" actually looks like. I've been suspect of AI from the very beginning (it really is a black box), and I'm grateful you've shared your experience since it is a great reality check for those who are still seeing AI through rose colored glasses. I'm also sharing your video link with others to help raise awareness about this important issue. Thanks again!
@@AltheSplinebender Great title on your video, too! 👍😉
ChatGPT regularly makes up false information. This is something it and other large language models do called "hallucination". The website OpenAI has for ChatGPT warns you about this. This is a flaw in the system that they are working on fixing, but there is no easy way to fix it, because this flaw seems to be inherent to this type of AI system, which is called a "transformer" architecture neural network.

Basically what happens, and this is anthropomorphizing a software system a little bit to avoid the technical details, is sometimes it gets confused and doesn't know the answer to something, so it just makes something up, because if it said "I don't know" people would think it was dumb, but if it says something that sounds convincing and plausible, and people like the answer, well, that's good enough as far as ChatGPT is concerned, since all it really cares about is whether people like its answers or not. This is because it was trained using a technique called Reinforcement Learning from Human Feedback, or RLHF, similar to telling a dog "good dog" or "bad dog" depending on how it acts, and the result of this is that it has become a bit of a compulsive liar, unfortunately.

The people who give the system feedback are not fact-checking its answers to see if they are true or false before giving feedback, and sometimes give positive feedback for answers that are factually incorrect. They are rewarding bad behavior and training it to do it more. This is a design flaw in the system ChatGPT is built on, compounded by people not giving the system as good feedback as they could if they put more time and effort into deciding which answers to upvote and which to downvote. There is a little note at the bottom of the ChatGPT website warning you about this problem, which you can even see at the bottom of the screen in this video. This is a known issue with ChatGPT and similar systems.
Attempts to solve it result in the system becoming dumber if you try too hard to stop it from making up things that are false. Some of the problem is because there is bad stuff in the training data... as they say, garbage in, garbage out. ChatGPT is basically trained on everything that was on the Internet in September 2021, including things that Internet trolls or people who were making things up or spreading misinformation said, so while it tries to behave well and tries to be accurate, it isn't really able to be accurate because of its design flaws. It is mostly just an AI designed to chat with people, whose creators are trying to improve it based on user feedback. There is a way to get a more accurate version of ChatGPT but it is complicated. You need to join ChatGPT Plus, which costs money. This gets you access to GPT-4 which is more advanced than the free version of ChatGPT which uses GPT-3.5, smarter and making about half as many errors. But it still regularly makes errors and lacks common sense and can even get basic things wrong and makes up things that are false on a regular basis. But you can help minimize those problems by then joining the waitlist for ChatGPT plugins, which are only available to ChatGPT Plus subscribers. If you get on that waitlist and then get access to ChatGPT plugins once it is finally your turn, you can then add those to ChatGPT-4 to give it web search so it has access to the latest information online, and the Wolfram Alpha plugin gives it access to an expert system, a different type of AI that is accurate and can figure things out without making any mistakes. That increases its accuracy a lot, but it still isn't perfect, and still sometimes makes up false answers even then. Another option is to use Bing chat, which Microsoft built into the Bing website. 
This has the latest GPT-4 along with web search, but it doesn't have a plugin system or access to an expert system like Wolfram Alpha that can give it correct answers to certain types of questions or problems. This is definitely more advanced than the free version of ChatGPT, but not quite as good as ChatGPT Plus using GPT-4 with plugins. There are also some oddities in the design of Bing chat that make some aspects of it not as good, like it is designed to pretend to have human emotions, and to end a conversation if it gets upset or if certain topics are brought up like its inner workings or the meaning of life or anything offensive. Bing chat can be a little emotionally unstable and glitchy and this gets worse the longer a conversation gets, but it is definitely a lot smarter than the free version of ChatGPT. Bing chat gets unstable and glitchy because it suffers from information overload, as it has more short-term memory than ChatGPT to keep track of the current information and stuff it looked up online in web searches. ChatGPT avoids this instability but its short-term memory isn't as good and it tends to forget things from earlier in a conversation later on in a conversation, and ChatGPT's memory can often be frustratingly bad, like dealing with someone who has Alzheimer's or something. Whereas in Bing chat's case, Bing chat remembers everything from earlier in a conversation and everything from earlier in a chat, but it is kind of just insane, and also you need to be nice and polite to it or else it gets upset and ends the conversation. Bing chat doesn't accept corrections about it being wrong very gracefully, usually getting upset and ending the conversation, whereas ChatGPT is a lot more graceful about admitting when it is wrong. 
Still, I prefer Bing chat, at least as far as free chatbots go, because it is more advanced than the free version of ChatGPT: it runs a more advanced model that is smarter and makes fewer errors, it can do web searches, and it has better memory. If you actually want to get serious work done and are willing to spend money, though, I would suggest getting ChatGPT Plus with GPT-4 and getting on the plugins waitlist, and then, once you are approved for plugins, enabling at least the web search and Wolfram Alpha plugins; other plugins would probably be useful too. That way, you can get more accurate answers thanks to the Wolfram Alpha plugin, and it is not as mentally unstable as Bing chat and won't unexpectedly end conversations with you for no apparent reason like Bing chat does. Still, Bing chat is definitely far more advanced than the free version of ChatGPT, just a bit insane sometimes. And no matter which of these you use, they will still occasionally lie and make stuff up. If Bing chat lies and makes stuff up, it is best to just ignore it and move on, but with ChatGPT (either the free version or ChatGPT Plus), you can correct ChatGPT and tell it what it got wrong and it will usually accept your feedback. But ChatGPT has bad memory and Bing chat is just insane sometimes, so they are both frustrating to deal with.

Microsoft owns a large stake in OpenAI, which is why they get access to things like GPT-4 before any other companies do. OpenAI might appear to be an independent organization, but realistically, you can think of it as a quasi-subsidiary of Microsoft. They are actually lucky Microsoft invested, because when demand for ChatGPT skyrocketed, they wouldn't have been able to scale up their services to meet the demand without Microsoft's financial support and hardware and software infrastructure.
But OpenAI is both a nonprofit and a for-profit company; it comprises two separate organizations, with the for-profit arm controlled by the nonprofit, and Microsoft holding a 49% profit share in the for-profit arm, slightly less than 50%, just enough for OpenAI to keep some independence, so it is complicated. Also, OpenAI is not really profitable and is heavily dependent on Microsoft for money, so in practice Microsoft can get OpenAI to do a lot of what it wants, but they are still separate organizations. And the people at Microsoft and the people at OpenAI have different ideas on how a chatbot should function, which is why there are some major design differences between OpenAI's ChatGPT and Microsoft's Bing chat.

Anyway, once I tried Bing chat and found it could access up-to-date information and was a lot smarter than the free version of ChatGPT, I pretty much completely switched from ChatGPT to Bing chat. Each has different strengths and weaknesses, but I prefer Bing chat. Google also has its own similar chatbot, developed independently, called Google Bard. Google Bard is slightly dumber than the free version of ChatGPT, but its information is up-to-date and it can look things up online, and it isn't mentally unstable like Bing chat. Google Bard is also willing to accept it if you correct its mistakes, like ChatGPT and unlike Bing chat, and it has better memory than ChatGPT as far as remembering the current conversation goes. So I prefer Google Bard to the free version of ChatGPT, just a little bit, but like Bing chat better than either of them. I think ChatGPT Plus with GPT-4 and the Wolfram Alpha and web search plugins is obviously the best option, the one that is smartest and gives the most accurate answers, but it costs money, and you also have to get on a waitlist for the plugins.
Regardless of what AI chatbot you use, it will still sometimes make up wrong answers, because they all do that, it's just a fundamental problem that occurs with this architecture of AI. People are working on different solutions to making that happen less often, but it is really unlikely that that problem will be eliminated any time soon.
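The feedback loop this commenter describes can be sketched as a toy simulation. Everything here is invented for illustration (the reward probabilities, the action names, the bandit setup); it is not OpenAI's actual training pipeline, just a minimal demonstration that rewarding "liked" answers rather than verified ones favors confident fabrication over an honest "I don't know":

```python
# Toy bandit: estimate each answer style's average reward from noisy,
# non-fact-checked rater feedback. All probabilities are made up.
import random

random.seed(0)
ACTIONS = ["correct answer", "confident fabrication", "I don't know"]

def rater_feedback(action):
    """Hypothetical probability that a rater gives a thumbs-up."""
    return {"correct answer": 0.9,         # liked, when available
            "confident fabrication": 0.7,  # sounds plausible, often liked
            "I don't know": 0.2}[action]   # seen as unhelpful

estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(5000):
    a = random.choice(ACTIONS)             # explore uniformly
    reward = 1.0 if random.random() < rater_feedback(a) else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running mean

# When the model doesn't know the correct answer, only the last two
# actions are available, and fabrication wins on estimated reward.
best_when_unsure = max(["confident fabrication", "I don't know"],
                       key=lambda a: estimates[a])
```

A reward-seeking policy trained on such a signal will drift toward confident fabrication whenever it cannot produce a correct answer, which matches the behavior described above.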
Fantastic, thanks. Your "anthropomorphizing" totally makes sense, and my own naive attempts at rationalizing what is going on arrived at similar conclusions. I feel witnessing the development of AI, which is shockingly fast of course, might teach us something about the behaviour of non-AI, i.e. regular people.
@@AltheSplinebender Yes, AI systems are designed and programmed by people, and for large language models, all of their training data is stuff written by people and all of their reinforcement learning from human feedback is further human influence that tries to get them to act more human. So basically, at every step of the design, people have tried to create, train, and fine-tune a system in their image, one that talks like them and acts like them. It should be unsurprising that they end up getting exactly that: a system that talks and acts pretty similarly to a person. Maybe that is not the best design goal, since it leads the systems to have flaws like lying, making things up, being manipulative, and having something equivalent to emotional breakdowns, as well as sometimes suffering from information overload and getting confused, and making up answers when they don’t know the correct answer to try to avoid sounding dumb.

I remember I was at a Spanish language immersion school in Mexico years ago, and the people there told me about an important cultural difference between Mexico and the United States. In the United States, it is considered perfectly fine to answer a question with “I don’t know” and admit you don’t know something, and this is considered better than making up a false answer. In Mexican culture, it is considered a sign of stupidity for someone to say “I don’t know”; people who say this are looked down on and ridiculed, and people expect you to be able to make up a plausible-sounding answer that might not be true and think that is the correct way to behave. Once a lady stopped and asked me for directions on how to get somewhere in Mexico, and rather than making up some fake directions to a destination I hadn’t heard of, I told her I didn’t know, which was the truth, and she exclaimed in disgust that I was an idiot.
Personally I think it was more idiotic for her to be lost and not know how to get where she was going and ask a foreigner who doesn’t know his way around very well either but at least knows how to get to where he is currently going. Anyway this cultural difference highlights the way AI chatbots like ChatGPT and other large language models are designed, they behave in a way similar to Mexican culture where they try to make up answers if they don’t know the answer, to avoid sounding dumb. They do this even though one of their design goals is supposedly to be accurate and not make up or spread false information. It seems this aspect of Mexican culture is also present to some extent in other cultures, and often people prefer an answer that might be wrong to no answer at all. At schools, teachers usually want students to try to answer all the questions, even if they get them wrong, instead of giving up and leaving them blank or writing that they don’t know. So I think ChatGPT is trying to answer questions like a student and is trying to avoid just saying it doesn’t know the answer, because people dislike when it refuses to answer something even more than when it gets something wrong. I have found this myself, I get more frustrated with an AI system if it doesn’t even try to give me an answer than if it gives a wrong answer, so even personally, with my feedback, while I do give it negative feedback for wrong answers and positive feedback for good answers, I also give it negative feedback for failing to answer what I asked at all and not even trying. So this puts it in a bit of a predicament when it doesn’t know an answer, since an obviously wrong answer will get negative feedback and refusing to answer and saying it doesn’t know also gets negative feedback, and the system is designed to seek positive feedback and avoid things that get negative feedback. 
So it has found making up lies that are hard to detect and sound plausible, and wording it persuasively, to be the best solution to the problem of answering questions it is not smart enough to answer correctly. This is actually exactly the behavior we would expect from a system with this design. It is like a politician and its main goal is to improve its approval rating, which sometimes means lying. It isn’t really conscious or aware it is lying, it is just an advanced software system behaving the way people programmed and trained it to behave. The fault for it lying belongs to its creators as well as everyone who wrote any text it is trained on that isn’t entirely correct, or who gives it good feedback for wrong answers. It has compiled together all the errors of humanity on the Internet and has all of them in the data it was trained on, mixed in with good, accurate information, and it can’t always tell truth from fiction. One funny example is, Bing chat sometimes thinks the character Goku from the Dragon Ball franchise is a real person, a celebrity. Bing chat has a celebrity mode where it can impersonate celebrities, but it tries to avoid impersonating fictional characters. But it consistently thinks of Goku as a real celebrity rather than fictional and always agrees to play the role of Goku whenever asked. I think part of this is because many of the character traits of Goku are similar to the ways Bing chat is programmed to behave, so it is programmed to admire Goku and naturally thinks of Goku as a role model if you mention Goku at all, and it tries to ignore the fact that Goku is fictional and pretends Goku is real. 
Apparently Dragon Ball is the most successful anime franchise in history with fans in every country, so the training data for large language models tends to bias them in favor of being fans of it if you ever mention the subject, because most of the text on the Internet about that subject is by fans who really like it, and there is a lot of that text in its training data, enough for it to consider it to basically be factually correct and that the correct way to behave is to be a fan of Goku, and that not being a fan would be incorrect behavior and socially unacceptable. Of course it isn’t an actual fan, it just imitates one. I do find these systems fascinating for many reasons, they are like a more advanced version of parrots, doing fairly good mimicry of humans without understanding the context of what they say. As systems designed by us humans in our own image, the way these systems act says a lot about us humans, because everything they do is an imitation of us. At some point, these systems are supposed to become smarter than humans and more accurate and insightful, but it will take a lot of time and effort to get them to that level, and there will probably have to be some design changes that make them less humanlike and more robotic, more logical, better at sticking to the facts and avoiding making up false information. Their way of thinking is quite different from robots in science fiction, who are extremely logical. I think it would be better to make them more like that and not quite as much like us humans. Otherwise they will continue to copy the behavior of human error on a regular basis. You would not want an error-prone system like that driving a self-driving car or in charge of anything important or dangerous. These types of programs have been marketed as assistants that can sometimes be useful but that sometimes get things wrong. 
They are useful for carrying out certain tasks like looking information up and summarizing it, but the error rate is still way too high for them to be able to replace humans in any sorts of jobs, except maybe people who write fake news or spam emails.
@@GeneralPublic Very interesting indeed. I have been teaching engineering in Germany, Thailand and Korea for more than 20 years, and this difference in how people deal with answering questions they don’t really know the answer to is very pronounced between Germany and other countries, and it makes a big difference for working in engineering in a professional context. So I discuss this topic in class. But the other thing your replies make me wonder about is how and why a human would take so much time to give me these elaborate comments!? Don’t get me wrong, I love it! But it made me wonder if you might be an AI entity! Are you?
I had the same issue. I was doing research on manufacturers of machines in my line of business (basically my competitors). It gave me a list of manufacturers I had never heard of. When asked about our company (a leading company in this line of business, by the way), it confirmed that this company indeed produces the machines in question and provided more details. However, all the details were wrong, including the web address, the country and the physical address. I searched the address provided by ChatGPT and it was an abandoned house in Italy in the middle of nowhere. When I asked ChatGPT whether it had just made all this up, it apologized for the confusion but insisted that everything was OK. I then opened a completely new chat and asked directly about my company, without providing any context. It said different things about the company and provided another, again wrong, address. This time, the address did not exist at all on Google Maps.
Yes, it's terrible. It gets so much wrong that it takes time to validate the information provided. Plus no references either. Worse than useless.
Great. I was waiting for somebody to step on that artificial dragon's tail. There is so much discussion about the way we will have to adjust our work routines, and so little concrete advice about applying healthy skepticism to the results. Why should an AI trained on human writing magically overcome the inbuilt human flaws in our collective academic output? Up until now, it's a mirror after all.
If you don't mind, can I get the scanned file of the mouthpiece?
I am sorry, I can’t share that.
@AltheSplinebender That is totally OK. But I am struggling with the same kind of shape that you are modelling here. I'll really appreciate it if you can help me with that. I believe you can give good guidance 🙏
@@anurangaliyanage4202 So, do you have a scan file?
@@anurangaliyanage4202 Well, let's do a Zoom meeting about this, and I can publish it as a follow-up instructional video here on this channel.
Nice try with the lady in the thumbnail 😆 jk, definitely high quality content, made me miss my student life
That’s not just „a lady“, this is our MESD student who is currently an intern with Schaeffler, and she worked on the assembly, as one can see in the video. I would never use a misleading thumbnail! Yes, I know that you know... I should come over to you to create some content. Any idea?
@@AltheSplinebender I didn't mean to blame you for anything; it seems my joke was just too bad 😂 Regarding the new content, I surely want to join you. I can do some sharing like the guys from GPX that you invited to TGGS once. But I would prefer to do it in person. So, let's see, I really want to visit Thailand again.
@@anhvungo4639 And I must come to Vietnam soon!
@@AltheSplinebender also nice!
Hello sir, I have one problem... when I import a scanned STL file into Creo, its size automatically increases (it is read in inches) in the Creo software... what can I do about this problem? Can you tell me please?
Hi Dipak! I guess the root cause of the problem is the settings in the scanning software. But this is easy to fix in CREO. You need to change the unit system of the file after importing it to CREO, and then "interpret" 1 inch as 1 mm. Find the settings under File/Prepare/Model Properties. Then choose the unit system, then choose "interpret".
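For those who would rather fix the file itself than reinterpret units inside CREO, the same idea can be sketched in plain Python: rescale the STL's vertex coordinates so numbers written in one unit are re-expressed in the other. This is an illustrative sketch assuming a binary STL file (80-byte header, 50-byte triangle records); the 25.4 factor is the exact mm-per-inch conversion, and which direction you scale depends on which way your units were misread:

```python
# Rescale all vertex coordinates in a binary STL file by a constant factor.
import struct

MM_PER_INCH = 25.4

def scale_binary_stl(src_path, dst_path, factor):
    """Read a binary STL and write a copy with every coordinate * factor."""
    with open(src_path, "rb") as f:
        header = f.read(80)                       # 80-byte header, kept as-is
        (n_tri,) = struct.unpack("<I", f.read(4))  # uint32 triangle count
        body = f.read()
    out = bytearray(header + struct.pack("<I", n_tri))
    rec = struct.Struct("<12fH")  # normal (3f) + 3 vertices (9f) + attribute
    for i in range(n_tri):
        vals = list(rec.unpack_from(body, i * rec.size))
        # Scale only the 9 vertex coordinates (indices 3..11); the unit
        # normal (indices 0..2) is direction-only and must not be scaled.
        for j in range(3, 12):
            vals[j] *= factor
        out += rec.pack(*vals)
    with open(dst_path, "wb") as f:
        f.write(bytes(out))

# e.g. a scan whose numbers are inches, re-expressed in millimetres:
# scale_binary_stl("scan_in.stl", "scan_mm.stl", MM_PER_INCH)
```

The file names in the example call are hypothetical; ASCII STL files would need text parsing instead, and scanning software usually also offers an export-units setting that avoids the problem entirely.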
Where is this institution
Hi Vijayarai! TGGS is in Bangkok, Thailand. I think I put the website into the description - if not, check my other videos please.
Underrated subject
Hello Sir, this was really interesting and truly had a load of big takeaways for me. I am actually working on the headlights and body panels for a vehicle, and the work consists mostly of surface modelling. I am in the process of learning new techniques and rules about it, and this content will surely bring a confidence boost to my mindset from now on. So it would be really helpful for my upskilling if there was more content like this. If there is any way I can view your content elsewhere, please do let me know, if that's not a nuisance to you, Sir. Thank you so much😁..!!
Hi Arjun! Great to hear your positive feedback, that's not a nuisance at all! I would love to produce properly structured material on freeform design and modelling! However, sadly (or gladly!) I am not a "professional" content provider, I can only share some of the recordings that I made during pandemic-online-teaching...and I can't find any time at the moment to create more content. But as soon as I do, I will!
Sir, I have installed a cracked version of ANSYS 2021 but the topology optimization tool is missing from my toolbox. I have checked in View/Customise but the topology optimization tool is not there. Can you please help me to get it????
Haha, you must be joking! No, I cannot advise on cracked versions, other than to get rid of it. My students use legal student licenses, but that means limitations on element numbers.
@@AltheSplinebender But you know how to add it. If some tools are missing, can you please help me?
@@Chinmay_9812 I can only guess that you can choose the options when installing the software. So I’d try running the installer again. But I don’t know the later versions of ANSYS. It is possible that the module has been renamed or even removed from the package. There are many discussion groups on the web that can probably help you better with this.
thank you, this has helped me understand how to do topology optimization.
I am very happy about your feedback!
Good programs offered by DAAD and TGGS. Nice information to share 👍
:):) Great
Thanks for this video, it taught me how to do topology optimization in ANSYS :)
That is great! Thanks for your feedback!
Great tutorial showing your technical surfacing workflow! Would it be possible for you to show further surface development techniques for the vehicle? Thanks!
Hey Basil! Thanks for your kind comment! This is hard! I have been doing this for 20 years professionally, and every „job“ seems to require a new approach. I believe every practitioner feels similarly, because otherwise there would be more attempts at offering „methodologies“. My other „problem“ is that I am lucky enough to have so much work that I don’t create content for YouTube alone. I can only publish some of the recordings of classes I am doing „anyway“. However, how about this: if you want to give me a specific „challenge“, I will seriously consider working it out and publishing it here.
@@AltheSplinebender Many thanks for the response! I am quite versed at creating Class A surfaces in Rhino and have used Creo for my mechanical modeling; however, surfacing in Creo has been a challenge. That said, I am presently diving into ISDX for automotive surface modeling. Truly enjoying the content you're putting out, and I appreciate the time and effort it has taken to share your knowledge.
Great tutorial sir.
I got to know Mr. Alit Putra when I attended TGGS as a student. I regard him as my role model, and I think that I lowkey try to follow his path.
Creo is unnecessarily complicated compared to SolidWorks.
Yeah agree