An Actually Big Week in AI: AutoGen, The A-Phone, Mistral 7B, GPT-Fathom and Meta Hunts CharacterAI
- Published 28 Sep 2023
- From dramatic new use cases for GPT Vision, Meta bringing language models to billions of people, Autogen as the new AutoGPT, to what I’m calling the Altman Phone, this is a huge time in AI. I’ll also cover Mistral’s 7B model, the new CIA-bot, Orca potentially replacing OpenAI models at Microsoft, and yesterday’s fascinating GPT-Fathom paper.
www.metaculus.com/ai/?...
/ aiexplained
Chapters:
0:35 - GPT Vision Use Cases
1:32 - Meta AI to 4 Billion People?
3:00 - CIA-Bot
3:48 - The Altman Phone
4:47 - AutoGen
8:26 - Mistral 7B
9:58 - Orca @ MSFT
11:20 - GPT-Fathom
GPT 4V Agent: / 1707480439793840402
GPT Vision UI: / 1706823089487491469
Meta Models ft. Mr Beast: about. news/2023/09/int...
Character.AI Valuation: www.bloomberg.com/news/articl...
AutoGen: www.microsoft.com/en-us/resea...
Altman Tweet Timelines: / 1705752292484624863
The Altman phone, The Verge - www.theverge.com/2023/9/28/23...
CIA Bot: www.bloomberg.com/news/articl...
LLMs for Censorship: www.lesswrong.com/posts/oqvsR...
PRISM: en.wikipedia.org/wiki/PRISM
Mistral 7B: mistral.ai/news/announcing-mi...
Perplexity Labs: labs.perplexity.ai/
Orca at Microsoft: www.theinformation.com/articl...
My Orca Video: • Orca: The Model Few Sa...
My Phi-1 Video: • Phi-1: A 'Textbook' Model
GPT-Fathom: arxiv.org/pdf/2309.16583.pdf
/ aiexplained Non-Hype, Free Newsletter: signaltonoise.beehiiv.com/
For more on GPT Fathom, check out:
twitter.com/shen_zheng25741/status/1713053733939015935
github.com/GPT-Fathom/GPT-Fathom
Your videos make it easier for me to feel like I’m somewhat "in the loop" with this stuff, and I find them so interesting! Thanks for keeping us updated 👍
Thanks Blah
All blah, no harah
I definitely agree! Your research is very appreciated! :)
I love how you had to clarify this week is ACTUALLY big, since every week in AI is massive and somehow this one is just crazier than usual :3
@@Writivite what’s with this hostility, are you a prick or smth
@@Writivite :3
Holy, that web designer is crazy. It’s clear OpenAI is doing more than just instruct fine-tuning and regular fine-tuning. They must have custom templates that train a model to respond to multiple queries. There is no other way they can get these chatbots to perform so well on multi-shot tasks.
Eh, I can tell you quite a few folks have pipelines that perform as well with open models, which are cheaper than OpenAI’s APIs.
@@terebat. Can you tell us? I would love to test them out; are any of them open source, like their methodologies?
@@terebat. Can you tell us what are those? I would love to try them!
@@gigiosos1044 Yeah, I seriously don’t know of a single one, and I can’t tell if this guy is talking out of his ass or is serious (only because he didn’t respond with any useful information). Because I would really like to try one. Using oobabooga I get terrible multi-shot performance with any non-Llama-2-chat model.
Hey Philip! Just wanted to tell you how motivating your videos are for me to learn AI. I'm currently self-studying on that subject, and you have been a big part of reminding me why this is the right path, through just being a thorough researcher and deep diving into the latest news on the topic. Thank you for that :)
Thanks maciej, means a lot
@maciejbala477 any suggestions on how to start from scratch?
I would also like suggestions on how to get started
I started teaching myself BASIC on a C64, from an awful manual, at around 7 years old. No one, except a few mates, knew anything much about computers then, or not in our world. Then I studied Computer Systems Engineering in the mid 90's, around the 486 epoch. The computer/brain analogy was around, and I was often asking lecturers whether software could be written that closely mimicked the brain's design and feedback loops etc. I very quickly realised the hardware limitations at the time. We were all still trying to figure out how Moore's law was going to pan out.

Watching the growth over the last decade was certainly interesting, but to see where things are at now is just.. insane. I wondered how the world, and my parents, were going to handle the transition into the age where everyone would own a PC. It was pretty damn bumpy for the computer-illiterate people my age, and almost impossible for my parents' generation. Now I'm trying to keep up with AI advances and it's getting really, really hard. My old PC-literate peers have mostly given up, and I'm one of the few who is still trying to stay informed.

So, I guess my point is that even with a MASSIVE head start in computing, I am still most likely going to be completely left behind by this technology, and possibly by the world in general. If my parents found it a nightmare, then my generation is in big trouble. I myself might survive a bit longer, but I should really start looking at buying a cave with decent Wi-Fi.. Starlink maybe.. But wherever I am I'll still keep watching your channel; through my laser-satellite assisted eyeball interface.. or whatever. And I wish everyone subscribed here the best of luck, because we're ALL going to need it.. ..what a crazy time to be alive. xx
Epic comment, and thank you for sticking around
I guess one thing to balance is the granularity of knowledge. There is always new stuff happening, but a lot of it is improvements on existing things. Some of that you can ignore while paying attention to the bigger things. Gemini, for example, will probably be a big deal. In comparison, Mistral, while cool, is "just" another open-source model that is better at a smaller size.
I guess moving to a third-world country could be a way to continue living in the "old way" for at least one generation.
After that, I predict everyone in the world is going to be pretty much a slave, having to act in the way the government decides they should.
But everybody will probably be rich in stuff, food, medical care, and be very safe.
A perfect prison...
Don't worry too much friend. Like you said, you have a great head start. On top of that, you gotta remember this tech is going to be a lot more intuitive, not to mention you can literally ask the tech to explain itself to you. One of the first things I did with GPT 3 was ask it to explain to me how it works.
@@jdata This is actually a good point that I ponder a lot.
Some uses I've made for GPT-4V:
Take a jpg resume, extract *all* text properly, use ADA to sort the events by date, and create a visualization of the timeline of the candidate.
Identify a movie from a frame.
Guess what city is in the photo.
It's incredible.
Self improvement with Vision is just crazy and now that it got the ball rolling ... Who's to say what it'll be capable of down the line.
Some find it scary, I find it hope inducing.
hope inducing? hopeful?
Next video in chat gpt
@@nickb220 Don't be such a pedant.
please be both. You have to be both, like that's the only correct approach
That self-improving GUI code is a genius idea; there's so much you can do with that, let alone all the other possibilities self-improving vision has to offer.
We are getting ever closer to the point where we can just point at a competitor's app/service etc, and just say: build that.
No IP laws will be relevant. Not sure how competition adapts to that.
@@dakara4877 i know, wins the one who has the best model xD
@@knoopx Yes, I've been telling people that AI is the end of all middleware. Middleware is just something in between what you need or want. AI will simply give you what you want and 99% of businesses are just middleware in some form.
Thank you for always putting your sources into the video description. This helps me tremendously with my own research since I can pickup on the inspirations that I get from your videos without having to search around first.
Yw Dimension, here to serve
My favorite use case for AI that I haven't seen is integrating it with augmented reality glasses and using it as a virtual assistant. As somebody with ADHD I would find it extremely beneficial to be able to just talk to a system that's able to see what I can see and use it to document things for retrieving later.
I could use it to plan my schedule, store information that I need to reference later, and many more things I'm not even thinking of right now It would be an incredibly helpful device for anybody who has any issues with executive dysfunction.
@@fontende wtf are you even talking about? your comment has absolutely nothing to do with the parent comment.
Check out Meta Raybans, definitely getting closer
John, I agree with this 100%. It is something I’ve been wanting to work on myself, though of course it is hard because of… executive dysfunction lol.
You are the number one source for AI news. Keep it coming.
Thanks UnderDog!
A little feedback: the red highlighting with white text is not clearly readable on devices with screens smaller than a laptop's.
Good to know! Someone commented before about blue; any suggestions for the best colour?
What a week huh? Thanks for keeping us up to date :)
ChatGPT with vision improving DALL-E outputs would be super interesting. It's almost an adversarial approach, but involving a transformer-based language model. Funny how that goes.
As a Microsoft web engineer, we do look at OpenAI as a partner but also a competitor since we compete for some of the same traffic.
Thanks for the comment Oculus, interesting
funny how lex and zuck look more alive in their vr form
Your videos are really amazing and high quality. Can’t imagine how much effort you put in all these videos. Reading all the papers, researching, benchmarking things yourself. Keep going! :)
"The video is long enough" it felt like 22 seconds I need more 😢
Haha nice, thanks Roma
Serious suggestion, if an "Altman Phone" is being developed, I want it to have a coverable (or even better, no) selfie cam.
Security can only do so much to work around a solid wall, and under-screen cams, non-stick glass screens and so on make it difficult to cover the damn things.
I'm only 22, this ain't old man syndrome talking here.
I think this might turn into a trend where big companies will try to make an LLM for a specific task, like maths, and make it as cost-efficient as possible, then use agents like AutoGen to connect all of the LLMs that have specific tasks they perform.
It's going to be wild seeing software developers basically creating and staffing entire "virtual" companies with models and agents. The Sims for enterprise!
I could totally see that. Although then we'll need another benchmark to assess if that manager is correctly assigning the tasks!
@@mybluesock Soon: System message: "You are a virtual agent tasked with implementing and monitoring management KPIs." 😂
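The specialist-plus-router pattern this thread describes can be caricatured in a few lines of plain Python. This is a sketch, not AutoGen itself: the two "specialists" and the keyword-based routing rule are invented for illustration, and in a real multi-agent setup each function would be backed by an LLM call.

```python
def math_agent(task: str) -> str:
    # Specialist: only evaluates arithmetic expressions.
    return str(eval(task, {"__builtins__": {}}))

def text_agent(task: str) -> str:
    # Specialist: only does a trivial text transformation.
    return task.upper()

SPECIALISTS = {"math": math_agent, "text": text_agent}

def manager(task: str) -> str:
    # Manager agent: routes each task to the right specialist.
    # In an AutoGen-style system this routing decision would itself
    # be made by a model, not a hand-written rule.
    kind = "math" if any(ch.isdigit() for ch in task) else "text"
    return SPECIALISTS[kind](task)

print(manager("2 + 3 * 4"))     # routed to math_agent -> "14"
print(manager("hello agents"))  # routed to text_agent -> "HELLO AGENTS"
```

The benchmark question raised above then becomes: how often does the manager pick the right specialist?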
Thanks for your inquiries on AutoGen! Looking forward to what comes next.
Thanks juan
Beautiful especially the logic with gerald and you and the demo. Subagents and breaking down compositional problems/challenges.
I like the insight into the model impact you deliver in the video versus just reporting developments.
Thanks beaumac
Love this video! That's Philip! I'm really interested in learning more about the last paper and AutoGen!
An AI phone could be huge for accessibility. Imagine a phone which doesn't require users to interact with a screen or even learn how to use it.
I'd imagine the Sam Altman phone would be a disk with a camera on it that you can wear around your neck, which can also be clipped onto your trousers or possibly the wrist like a watch.
This is hands down the best channel to stay tuned on the AI topic that I know of.
I recommend checking out Two Minute Papers; he does AI/machine learning focusing on computer graphics etc. There's also Fireship, but he's more programming-focused :)
1. Recent AI developments have significant implications from various perspectives.
2. GPT4 vision can recreate user interfaces and improve on them through visual feedback.
3. Meta has launched 28 AI chatbots featuring celebrities, targeting up to 4 billion users.
4. The CIA and FBI are developing their own chatbots using large language models.
5. Jony Ive and Sam Altman are working on OpenAI's first consumer device, the "Altman phone."
6. Autogen from Microsoft can transform and extend large language models through multi-agent conversations.
7. Large language models can be used for State censorship, as seen in China.
8. The Altman phone may have features that mitigate the addictive nature of technology and rely less on screens.
Awesome update as always. Thank you!
Thanks Jimraa!
Good stuff mate. Thanks for staying abreast of it.
As always: Thanks for your amazing work.
I can't fathom how fast everything is going
GPT Vision and Autogen working together would be interesting to see, even more interesting if one of these smaller models like Mistral AI had this ability while running offline on your computer.
Thanks so much Sean for your ongoing support
Autogen seems really big. I am really interested to see how this type of multi-agent techniques evolve over the next couple months.
Your videos are a gem. Thanks for summarizing all this interesting and crazy AI news.
Thanks Assailant
As someone in a crazy week, your updates are invaluable. Ty friend ❤
Thanks man
Hey man you're cranking up the production rate! :D Cheers
EDIT: Yes, I agree with your appraisal about Microsoft trying to get away from OpenAI models. I noticed that at Microsoft Inspire when they very deliberately put OpenAI models alongside open-source models on Azure. Writing's on the wall.
The world is cranking up its AI lol
@@aiexplained-official So excited for the future that I'm cranking it right now
@@David_Box great job AI troll 😂🎉
Yes it sure is Mr smarty pants.
We are living in amazing times where there is so much happening in the world of AI. Thank you for doing what you do. You are my main AI news source 😁
Thanks nightcrawler
I adore how you break down all the nuances behind the curtain of development and academia!!
Is research your primary job, or are there other projects beyond content creation?
Regarding the CIAs LLM:
I highly recommend watching the TV show "Person of Interest". It beautifully shows what an AI surveillance system could look like, both when done responsibly and when not done so.
While it's from 2012, it still holds up pretty well and brings up many concerns about AI safety as well as some solutions.
Looks cool
that show rocks! highly recommend
@Mark-zg4ky ok
I think that in order to have normal and constructive discussions about AGI, we should really agree on a definition... Top labs and researchers should agree on a definition and set clear benchmarks as well. I don't care if it's 99% at MMLU, brewing coffee, "can do most economically important tasks" plus a way to calculate this, or an agent that will be able to operate with $100k and make more.
Just make sure we have a definition. But something tells me that's not just going to happen...
And actually, since agreement on a single definition is probably impossible, maybe someone can make some carefully defined levels with snappy names. Then we could say AGI Type X and people would know what it meant.
Well done. Your channel will be an increasingly valuable resource as things develop.
Thanks Stephen, means a lot
I think the most important part about an Altman phone, as you call it (first of all, it needs a better name though), is for there to be some sort of assistant on how to safely browse the web or install apps. Like, when children are about to download some stuff, it saying "hey, this might be unsafe", or the same for just giving out passwords. Another feature you would absolutely need, as long as this is true, is to make the AI completely cut off from the internet. No telemetry. No hackability. Make it run in an isolated box that can only be accessed by software updates. If those are hijacked, the phone is unsafe anyway.
Happy to see you using Perplexity a bit more!
With all of this news coming out lately, each day I become more and more intrigued about Gemini. Exciting times
The thing that really blows my mind about stuff like Orca is, GPT-4 was released just 6 months ago. In only 6 months, we now have an open source model close in capability to GPT-4 that needs a fraction of the power and compute to operate it.
Combine that with modern prompting techniques and tech like AutoGen and it won't be long, maybe even as early as Q1 2024, before we have entire teams of efficient autonomous agents running locally on our smartphones, performing complex tasks that improve the quality of life of everyone in a significant way.
It's crazy how fast the field is advancing.
And then on the other end of things, the fact that GPT-4V was I think done training in early 2022. So who knows what OpenAI has cooked up behind the scenes to release.
next week: ACTUALLY big week FOR REAL
Week After: Shocked Pikachu Face thumbnail, Crazy News for Realsies
AGI did not originally mean "smarter than all humans"... The AI Effect is real: en.wikipedia.org/wiki/AI_effect
Of course not, read Superintelligence on release...
This humble sysadmin just got his AI-900 cert and is working on deploying AOAI services for half a dozen divisions in the company I work for. The demand is insane, our privacy/compliance departments are working overtime trying to slow the roll. It's not going to replace anyone here anytime soon (for certain definitions of "soon" as you pointed out), but I could easily see it start reducing new hires within the year. The managers aren't interested in the details, the performance metric they are citing is "man-hours."
Do email at aiexplained@outlook.com, would love to learn more and discuss some things.
I can imagine a web-dev AI using tools, opening developer tools and clicking around, and then using the results of that as a feedback loop. Or a browsing AI. Basically a lot closer to how we use tech.
Second video of yours I watched, I subscribed after the first and commented and liked, this kind of dedication to good research is amazing to see! Keep up the good work :)
Thanks so much club
The nature of LLMs' "intelligence" is so mysterious to me. From this channel, I learned both that LLMs have a convincing ability to demonstrate Theory of Mind, yet they also can't reason that if A is B, then B is A. How can they simultaneously be so dazzlingly smart and yet have such massive blind spots? And: is the intelligence of today's LLMs necessarily a subset of the collective human intelligence, or is it possible it already has capabilities that we can't even measure for and that humans can't do? That is, some hybrid of sub-AGI and ASI at the same time?
This stuff gets deep! Hah. Philip, keep up the amazing work on your channel. I watch every vid!!
Love this channel! Well done!
Thanks chris
ty for fast updates
0:37 GPT-V Web Design
1:42 Celebrity Chat Bots
3:04 State Actor Tools, Article
3:53 Altman Phone - The iPhone for AI, Article
4:48 Autogen, Microsoft
7:47 Metaculus Ad
8:27 Mistral 7B
10:04 Microsoft Model Sizes, Article
11:21 GPT-Fathom Benchmarking, Paper
I did do timestamps too!
@@aiexplained-official Lol, thanks 🚀 I didn't realize. I assumed it would show up in the player UI if the author added timestamps. Thanks so much 🙏🏻
Juicy. Thanks for the updates as always!
Thanks Michael!
Your work is amazing dude, congratulations on this incredible channel, wonderful video WTF!🎉
Thanks Clax
I know we laugh off SkyNet memes, but I still haven't seen anything that shows we're NOT heading there.
SkyNet robots were pretty dumb tbh, ineffective
Perfect way to start the weekend! 🎉
great content as always :)
shame about the quality decrease in GPT-4
The rate of improvements is astonishing. We're racing towards a very different future. I like the current stuff, but the implications of superhuman intelligence being accessible everywhere are not pretty.
Thanks! Love your videos. 🙏🏼
Thanks stephen
These videos are never disappointing.
Thanks D
I am four minutes into this video and it is already creating so many goddamn "to-do" micro-projects on my list, just so I can say I actually use wtf I talk about lmao. INSANE pace of advancement.
The intuition I have about this whole AI landscape is that, going forward, more and more AI applications will use continuous feedback loops, thanks to multimodality and the shrinking of models, and AutoGen-like architectures will become the norm. It seems like using a single AI at a time to solve a problem is like asking a random dude to do something only a factory could.
Love your videos so much man.
Thanks Lance
The Society of Mind, a book by Marvin Minsky, a pioneer in AI, provides a theory of intelligence emerging from multiple agents working together, similar to AutoGen.
Do we know if GPT4 can compare images? Like I'd like to see if it can do a spot the difference for example.
Pretty sure yep
I beg to differ off the bat...
Your AI summaries regularly blow my mind.
YES ANOTHER VIDEO THANK YOU, dall e 3 is so good!
Available to almost all now!
AI Explained and a weekly episode of the Last Week In AI podcast keeps me up to date on everything I need to know. This saves me so much time trying to keep up to speed with the current state of the art.
Out of curiosity, what's your preferred podcast purveyor?
Thanks for another informative video.
It's not clear to me that an "Altman phone" would be less addictive than a normal phone. Surely it is not the screen that makes it addictive but the habit of being constantly in touch with it! Using non-visual senses to interface with it might if anything *increase* the user's emotional dependence on the phone.
True
Shout louder for those in the back!
Of course, that's just the usual lie-by-omission in the marketing.
Much appreciated. If you could share the autogen-math problem generator, that would be fire.
Insane content. Please keep doing it.
Thanks so much! Think it was this one:
Triangle p has sides of length x, y and z. If x = 5 and y = 8, what is the perimeter of triangle p?
(1) Triangle p is isosceles.
(2) z is a prime number less than 7.
A - (1) ALONE is sufficient, but (2) alone is not sufficient.
B - (2) ALONE is sufficient, but (1) alone is not sufficient.
C - TOGETHER are sufficient, but NEITHER ALONE is sufficient.
D - EACH ALONE is sufficient.
E - NEITHER ALONE NOR TOGETHER are the statements sufficient.
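For what it's worth, the two statements can be checked mechanically with the triangle inequality. This working is my own addition, not from the video:

```python
x, y = 5, 8

def valid(z):
    # Triangle inequality: the third side must satisfy |x - y| < z < x + y.
    return abs(x - y) < z < x + y

# Statement (1): isosceles, so z equals one of the other two sides.
s1 = [z for z in (x, y) if valid(z)]     # both 5 and 8 form valid triangles
                                         # -> two possible perimeters, NOT sufficient

# Statement (2): z is a prime number less than 7.
s2 = [z for z in (2, 3, 5) if valid(z)]  # only z = 5 survives -> sufficient

print(s1, s2, x + y + s2[0])  # [5, 8] [5] 18 -> answer B, perimeter 18
```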
I was just thinking yesterday how cool it would be to use ai as the main way I use the internet and computers, instead of my phone or desktop where I just easily get distracted all the time
Awesone work as usual, thanks!!
Thanks santiago
4:26 This is definitely something that makes sense for phones. Touchscreens were always a bad idea by themselves, IMO, due to ergonomics and what they're optimized for, but two-way verbal communication is what I've wanted for a long time. I've tried to use voice assistants before, but they're all awful; GPT-4-level LLMs, though, make them actually useful and often pleasant to use.
Touchscreens were a good idea. What would you have wanted them to have instead?
A bad idea by themselves.
They're optimized for one thing: swiping. And what they mostly get used for is mindlessly swiping. The web has largely shifted to mobile-focused, yet phone touch screens are awful for getting any real, meaningful, useful work done on. They're awful for typing, but that's what they get used for, also leading to the further degradation of the written language.
And the actual ergonomics are bad, too, leading to bad posture and cricks in the neck, while being an easy pathway to mindless consumption.
So yeah, by themselves, touchscreens have been a terrible implementation. And yes, I'm using one right now, and I fucking hate it.
Just finished watching that podcast before I started watching this, Philip, cool. At around 10 minutes you explain cost as being a reason these companies are compartmentalizing, or decreasing their exposure, in the off chance something goes awry. Their behaviour is appropriate for the current timeline, but I wouldn't put cash flow as a determining factor with these giants. Peace
My comment on the Lex Fridman podcast was that it looked like they were doing the podcast in a blanket fort.
Think I saw your comment on that vid or another one with Sutton
Tuning in months after and AI is still progressing rapidly. 😮
It’s like a revolution
Go Mistral! Loved their release tweet.
Great video!
Glad first 2 mins were great!
OpenAI moving into hardware is very interesting. But that really puts them behind all the big tech players. Still it will be interesting to see what Altman/Ive are planning on this front.
They are trying to make “her”
@@TheHeavenman88 Well, I think Inflection is trying even harder to do Her with their Pi. It'll be interesting to see how their model improves over time as well. They don't get as much coverage, but their cluster of 22k H100s is supposed to be fully built by Dec.
This is getting big. I heard about AutoGen a few days ago and wanted to try it, but I can't get my API keys to work properly; it claims I don't have Plus when I've had Plus since Plus was available.
Amazing video as always, Philip. I suspect that GPT-5 might be as efficient as Orca. I also suspect that Microsoft will ensure that it runs on their new GPU chip (Athena). Overall, it would be amazing to integrate the new DALL-E 3 into GPT-4.
Congrats to your prediction 🎉🎉🎉 I think you have more of them!
Thanks Birgitta!
Thank you so much ❤
Thanks Mars, yet again. Appreciated.
We seem to be nearing the point where, in the time it takes to describe one set of developments in AI another set of developments has arrived. Then it will be too rapid to keep up with.
The Visual Feedback loop is insane. Could it be the key for proper video generation?
i don't see how?
The same way people do it, actually… the tricky thing would be to give the vision model a reliable way to manipulate the generation; simple prompting would not be efficient or precise.
I think even the current video generators may have some aspect of this to them. To be able to have any kind of continuous changing output, it has to take previous frames into account. But I’m sure it’ll all get more sophisticated on multiple levels at once as we go on.
My guess is that the "Altman phone" will probably look and function like the devices shown in the film "her".
pure guess btw
Brilliant video as always. It's nice to see that Meta is pushing forward with AI commercial products as well. Mistral should go all in on huge uncensored models, they have nothing to lose. They can be used as a basis for really creative stuff. You can't do much art with these restricted models that are too limited, especially for creative writing and such. The censorship is too brutal.
I think censorship should be introduced in the training data I.e. make sure you don’t teach it how to make bombs or drugs
I take it back, you can learn to make bombs and drugs on the internet, no need for censorship!
1:49 definitely is something I need to cover for my channel.
Depending on how you look at it, AGI might come from deep learning if we're talking about a Frankenstein-style AGI model. But if we are talking about a natural artificial general intelligence, one that encompasses almost everything a human being can do, we need a feedback loop through training and inference everywhere, and the best way I've seen this done is through reinforcement learning. Also, we might need a few more innovations, especially when it comes to long-term memory. The current implementations can be regarded as hacks to keep the work going.
My vote is frankenstein
I can imagine different fine-tuned GPT-4Vs, some good at reading, some at recognizing objects, etc., like we fine-tune text models. And those vision models then forming an MoE.
Another great video 🙂
Thanks mckeed
Iterations on vision loops are going to be insane
There is also a difference between the performance of the web version of ChatGPT and the apps on Android and iOS.
I think the current models could be much more effective if they were able to create and manage objects on their own. In a test I did, ChatGPT and I played a game of cards, where Chat was managing the decks, and it was obvious that it could not a) create a representation of our distinct decks, or b) make sure that these had unique cards stacked in a shuffled way. So I came to the conclusion that one of the biggest design flaws is that these models cannot hold a coherent representation of the topic of conversation, as arbitrary as it might be. I understand that this is fundamental to security, but if these objects were created explicitly, it could remedy much, in a facility similar to the GPT-4 editor fields, just where the models could have initial writing capabilities.
What do you think?
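One concrete reading of this idea is to keep the game state in ordinary code, outside the model, and let the model touch it only through function calls. A toy sketch, where the `Deck` class is my own illustration and not any real API:

```python
import random

class Deck:
    """External, authoritative game state; a model would only ever call draw()."""
    RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
    SUITS = ["spades", "hearts", "diamonds", "clubs"]

    def __init__(self, seed=None):
        # Uniqueness and shuffling are guaranteed by construction,
        # not by the model's (unreliable) memory of the conversation.
        self.cards = [f"{r} of {s}" for s in self.SUITS for r in self.RANKS]
        random.Random(seed).shuffle(self.cards)

    def draw(self):
        # Drawing removes the card, so it can never be dealt twice.
        return self.cards.pop() if self.cards else None

deck = Deck(seed=42)
hand = [deck.draw() for _ in range(5)]
print(len(set(hand)), len(deck.cards))  # 5 unique cards drawn, 47 remain
```

The model then narrates and strategizes, while the deck object enforces the rules it otherwise gets wrong.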
AIExplained: "There is no real way to prevent state censorship using LLMs."
Also AIExplained: "Bro there's going to be an AI phone!"
😂
I know, can't help but be simultaneously in awe and trepidation
@@aiexplained-official I know how you feel, the pace and scope of this technology seems to grow larger and larger by the year. Hopefully it all works out for people in a positive way.
@@QuickM8tey It is crazy, isn't it? Every now and then I have real existential dread about where this is all heading, and so fast. But then I spent a huge amount of time since yesterday playing around with DALL-E 3, and it is just so amazing and fun. We're in for quite a ride for sure.
Congrats on the Orca and smaller-models call!.... But I never saw the cost factor addressed, though....