Cloud is like a hotel: it's great for a weekend, or if you suddenly have a bunch of unexpected guests, and if you can't afford a summer house in a geographically separate area, it's good insurance against an earthquake. But if you live in a hotel, you end up paying too much for a lot of stuff you don't need.
Cloud is like renting a room in an apartment instead of owning a home. If the plumbing breaks, someone else is fixing it, on their timeline. A hotel works as an analogy for infrastructure one more level up the abstraction: better suited to the tradeoffs of pre-packaged services, à la carte options, dedicated support, premium plans, etc.
@@oussama40612 Buying and setting up your own servers is like buying a house. Sure, it will be more expensive at first, but you have full control. It's no apartment.
you just mention it and we are all hands on deck troubleshooting this EC2 issue in us-east-2:

Instance Connectivity

Jun 30 10:46 AM PDT - Power is now actively being restored and we are seeing recovery in connectivity to some instances and volumes. We are continuing to work towards full recovery.

Jun 30 10:16 AM PDT - We can confirm that a small number of instances have experienced a loss of power in a single Availability Zone in the US-EAST-2 (Ohio) Region (use2-az1). Some EBS volumes within the affected Availability Zone are experiencing degraded performance. All customers affected by instance connectivity and EBS impaired volumes were notified directly through the Personal Health Dashboard, beginning with an EC2 notification at 8:34 AM PDT. Engineering teams are actively working to restore power now.
You can run your apps on your own server and then hide it behind Cloudflare for the DDoS protection, if attacks are a problem. You do get another bill, but that's a bill for security, not for running your code.
I think there was a misunderstanding here. The ~25k is for legacy users. But that same server also hosts "Hey" and "Basecamp" which have millions of users. See the line "But the big move was HEY"
Because it's just a poorly written article. If you have an ambiguously named product, you have to be explicit in the same sentence, but this article relies on a lot of all-over-the-place vagary to justify its conclusion. And no, it's not a good sentence just because it's on a website called Hey, and the word is linked, and the next sentence says it's an application. What is "it" supposed to mean when you've just read a sentence ending with an interjection? It's web-2.0 brand-style writing that just adds to the confusion by co-opting an everyday word.
Ahrefs wrote a blog about how they saved $400m over 3 years by taking their core infrastructure off the cloud. Their product wouldn't be profitable if it was all in the cloud.
I think they're a special case, as they need to do very extensive crawling and thereby need a lot of straight on compute and network IO (which both are pretty expensive in the cloud). Yes for some services it matters, but I'd say that's more of a one-off as long as you're really using the capabilities of the cloud (autoscaling to have the right capacities based on request volume, cloud-native services etc.) and taking into account cost of administration etc.
@@DragonRaider5 It's not the type of application, it's the size of it. Once you scale up to a certain point, any cloud service will have diminishing returns. I've seen tens of thousands of dollars per month for a single service on AWS. Cloud makes sense for startups and hobbyists, but once you're making any decent money, it's time to rent your own hardware, then eventually buy it...
Haha, this is the first time I can correct ThePrimeTime! The plough analogy was referring to the piece of equipment called a plough, and the fact that after using the plough you would put that piece of equipment in the barn, where it would sit doing nothing for the rest of the year!
My firm rents space in a data center with our own servers. We run all of our environments there e.g., development, testing, uat, staging. When we went "cloud" shopping, we were surprised at how much more expensive it would be than what we have now.
@@lukealadeen7836 No, dum dum, you just host your workloads in 2-5 geographically separated colos and run either an active-active or active-passive strategy. Yes, it reflects on cost, but it is very easy to outgrow the startup stage to the point where it makes sense to self-host.
Same. I'm not looking for a way to build the fastest application, I'm looking for the fastest way to build the ideas I have, and ruby on rails works well for me, and it can scale well beyond any success I could dream of for any app that I build.
I once read a comment from a guy saying "No one cares if you have the fastest todo app". Why use a systems language to render a todo list in 1 millisecond instead of 2, and in return be 5 times slower to build because you need to take care of stuff like memory allocation? It's beyond my understanding.
Not sure why Rails gets so much hate. It is one of the fastest ways to build amazing applications. There are a lot of optimisations you can make to ensure the end user has a great experience. Serverless is way slower (cold starts) but seems to be all the rage these days. Once you reach scale (which 99% of companies never do) you can rewrite the slow pieces of your app if you need to. DHH also gets a lot of heat, but in my opinion you need to be a little crazy to achieve great things!
@@kengreeff Don't disagree at all, the right tool is the one that works for you. Serverless is useful where latency is not paramount and the task is small, as the cost is $0 or near $0 vs a server running 24/7 and sitting idle most of the time. That's really the only benefit I see: cost.
Don't agree with you here. They said their demand is predictable, so they can wait two weeks for more hardware. Plus, they got a shitload of extra capacity with their current order, so they have ample time to respond to growing demand. Also, I am surprised at your being surprised at the db being the bottleneck. What else would be the bottleneck? It's almost always the db. Stateless compute is easy to scale.
It really boggles my mind that you think you need to be in the cloud to use Cloudflare's DDoS protection. Cloudflare just sits in front of traffic and forwards it to whatever address you tell it to, it doesn't matter whether that address is for an AWS API Gateway, an ECS instance, a bare metal server, or whatever else.
We had Cloudflare when on-prem. I've spent 30 years using on-prem servers and about 5 years using Azure. For on-prem I was hands-on, installing kit, repairing faults, as well as admin, development, etc. Azure is very expensive and you'll spend a lot of time and effort working out what you are using, how to save money, how to retool your apps to save money, avoiding doing stuff because everything costs money, worrying about whether you've left some super-expensive PaaS on, you will lose sleep as management will be questioning every expense. There's lots of cool stuff in the cloud, but I think it's a mistake to move everything there.
I see cloud vs on-prem like cars: HP vs PCP, the latter being cloud. All the dunces think PCP is good when it's shit, and in reality it's exactly made for dunces who think it's good when it's shit. Cloud was a new product for dunce CIOs to be sold old shit as new shit, and they all bought....
I think you're describing the sweet spot for a lot of use cases. Use a simple (possibly on-prem) backend, and proxy it through a global cloud provider at the front.
"worrying about whether you've left some super-expensive PaaS on" - There really should be a law that setting a budget is required before use of the service. Instead we get this whole "we'll bill you later, but won't tell you how much unless you know how to jump through our baroque billing/notification system" thing, where billing is basically a separate service from the service you actually care about.
"you will lose sleep as management will be questioning every expense" - Adding to my other comment: if management is questioning expenses, that points to management needing to take responsibility for FinOps (cost management) by creating budgets with automation runbooks. So if you're not the lead sysadmin, you shouldn't be losing sleep over things you don't even have access to, that are above your paygrade in the org; and if you are a sysadmin responsible for budget costs, then you should be paid enough that losing sleep is something you look forward to.
Compute isn't even my main issue, pricing-wise, with AWS!! Bandwidth, egress charges, hourly charges for "attachments", request charges, etc... Of a 20k bill, probably less than 10k is compute charges...
@@levifig "oh yeah, compute all you want on our servers, it's very cheap. What's that, you wanna move the results to another cloud? Oh, that's gonna cost you 😊"
Cloud can be insanely expensive for enterprise. I absolutely believe the cost savings on-prem point because of the numbers we've been running. The cute arguments for cloud i hear is that 'bUt we huv ta hire xperts.. and dats ard'. It's my opinion that for on-prem, you could have lavish amounts of paid server guys, devs, network engineers and still come out ahead in terms of cost. Relying on outsourced expertise has its problems... TAC being more terrible, thus more down time or prolonged tickets, and other maze like ticket games you have to play. Having on staff expertise gets faster resolution, products not floating on (potentially) poorly managed infra, and better continuity in your environment (paying staff not just for expertise, but for loyalty).
What does the physical infrastructure look like for a website with moderate traffic today? Curious since I've always loved the idea of running and maintaining my own servers.
A pattern that has helped me explain cloud vs on-prem consumption to companies is cooking at home vs going out for dinner every day. In general, you'll spend significantly less money if you manage your own cooking and avoid going out to eat every day. If you are not much of a cook, or you do not like visiting markets for ingredients, then you'll come up with a lot of reasons why going out for lunch and dinner is superior, even though you will never convince anyone who is on a budget, likes to cook, or doesn't mind cooking.
Cloud hype was built on the promise that it was new, modern, and cheaper, unlike the old IBM mainframe, and that you didn't need a lot of people to operate it, because the cloud was simple to use. And then cloud providers made the UX so bad and hard to use that even IT people are complaining, because cloud providers can make more money with their dark patterns. The newest IBM mainframe, the z16, has downtime of 3 s/year, costs up to $10M for the most expensive model, and can handle 1T transactions/day. About the only thing an IBM mainframe can't do is scale within hours onto a different continent. You know the cloud is bad when there are certifications just on how to work with it. Almost everything the cloud providers promised was false, and that is why people are returning to cheaper options.
Cloud advantages: low up front costs, rapid scaling (up and down), standard tools between different customers. Learning AWS kinesis or CDK or IAM is a pain - but if you move between employers, you can understand their AWS kinesis or CDK or IAM setups quickly. If you get a job at a company running its own hardware, you have to be onboarded by an existing senior systems engineer. But sure, if you have relatively fixed, relatively high computing resource needs then the cost savings from running your own hardware are colossal.
@@mikeswierczek If you treat your employers well and don't underpay them unless they change jobs, it doesn't really matter, because the up-front costs of onboarding a new hire is going to be dwarfed by the long years of work he or she will be putting in.
@@HrHaakon agreed. I'm speaking strictly of the case where you need rapid onboarding, or your company is small and you lose one of your key systems engineers to illness or a winning lottery ticket. If your company infrastructure is on AWS, Azure, GCP, etc... a new hire can figure everything out just from the web console.
@@mikeswierczek Well, AWS services are just wrappers on top of already-existing OSS tech, and there are private cloud solutions you can run on your own hardware.

If you are a small company and want to sell to an international market, then the cloud makes sense. Say you base your company in Kazakhstan but want to sell products to the USA and provide good service: you can launch cloud instances in a matter of minutes with no problem. Well-established companies that provide products within one country, like banks and insurance companies, don't need the cloud, because for their workload it's more expensive and less reliable than hosting their own solution; and to make the cloud reliable you need to put effort into the software, which costs money. The other option is to buy an IBM mainframe with 3 s/year downtime; it's expensive at purchase and you need people with IBM skills to maintain it.

The problem with the cloud is that the providers didn't deliver on the promise that it's cheaper and easier to use. You need skill no matter which option you use, and self-hosted solutions often provide a better experience, because they don't constrain your workflow with "best practices". Cloud providers, on the other hand, have "best practices", and a simple thing like creating a new db instance from a backup is a pain in AWS. And these "best practices" come with a vocabulary you have to learn. Why? Because cloud providers use dark patterns to make more money.
You can split up the services and get more control, but it all depends on your use case. Plus Ruby: yeah, we can complain about it, but switching is easier said than done. It's not just about the tech; it also has to make business sense, and in this world the two always conflict. Business vs tech is why adoption takes a long time to get fully implemented.
I'm not sure what the exact answer should be, but you should be able to compare the options for yourself, and make sure that you have enough users and spend to be able to try out each one again.
The Ruby hate is hilarious. If you look at Ruby 3, there are actually considerable performance improvements with YJIT. Who knows if they're using it, but if they are, it's actually respectable.
I think a lot depends on other factors as well. For example, at work we are doing digital forensics, and some law firms can be very demanding in wanting to know who can have access to what resources in your infra, down to who can access the database directly. With a big cloud provider you can't really guarantee that there is no way a malicious employee could ever have access to that stuff. But I don't think there are that many companies that need such extreme guarantees. However, it is kind of unfortunate that we have new devs who have never configured a server and deployed an app on it themselves. You definitely learn a lot doing that.
I know of a study showing that businesses don't get monetary benefits from being in the cloud. I think in the early stages it's okay to be in the cloud, and it's a must when you're on a hype-train growth curve, but there is also a point when you grow out of it. Basically there are 4 stages: the cloud is okay in 2 of them, and only in the hype-train phase is it actually needed; at least that's the theory. In general, I think your own server became more viable because of the end of Moore's Law: unlike in the past, you get top performance from your server for many years (at least when you aren't in the AI stuff). It needs to be decided case by case; here I think it makes sense because of the stable user base of the legacy products.
It only makes sense to move to the cloud if you can fire 90% of your infrastructure team. Unfortunately, it seems like your infrastructure team now needs to learn server hardware, networking, and now new management tools to do their job. The only part of the job that you’ve eliminated is the physical unboxing and installation of servers, which is the cheapest part of the job.
And a great place to get people in on the ground floor if you want to hire people who may come from non-traditional backgrounds. But hey, diversity is only skin-colour deep in corporate america.
Rewrote my Ruby service to Rust, from 5 ms got down to 1.5 ms p90 response time. Not a big deal, especially given that it took 20 times more time to write this code in Rust.
Maybe it would be more interesting, and nearly as fast, to rewrite it in C#. I assume that, depending on the complexity of your service, it would have cost 3-5x instead of 20x. But you should also consider whether you were already proficient with Rust before writing it; you can't just take a completely new and fundamentally different language from the one you're comfortable with and expect to develop as fast as in your primary language.
Amazon reduced the cost of operating Prime Video monitoring by 90% by moving its AWS microservice architecture to a monolithic architecture running on EC2 (which means the stack could also easily run in data centers on dedicated machines). They published an article about it; you should look it up!
The funny thing about serverless is that it's pretty much what we did pre-2000, with a bunch of PHP and Perl code being called through CGI. And then VMs came and we were like: wow, now I can run the modules and versions I want, and not what the ISP has!
@10:20 The whole "when you build your company around a product someone else offers it makes me hesitant" thing is funny to me in the serverless conversation, because the companies I have worked for are built around a WHOLE lot more than just AWS lambda. E.g. they've been fundamentally dependent on top of services like Salesforce, financial service providers, etc etc etc. Not to mention code that is dependent on all kinds of libraries and frameworks that rot and go sour all the time.
@@CottidaeSEA Salesforce developer here. The company got sold on Salesforce. "It'll be up and running in 6 months," they said... 3 years later, we're wishing we had rewritten the legacy VB6 app in a modern language. Would have had it done in a year.
You can have Cloudflare both HTTP-proxy and scrub traffic to your on-prem networks. We've been doing it for years. If we shoved everything into the cloud, we wouldn't be able to afford the compute, network, and storage that our QA-testing OpenStack cluster uses. Much of the rest could be cloud-hosted, but I wouldn't want to be cloud-only. There have been at least a dozen incidents of various cloud providers having hours of outages over the past several years where we were entirely unaffected, because all core infra is on-prem.
Here's the bit not mentioned in the article: MRSK is a home-grown deployment framework that they are positioning as a competitor to things like Capistrano, HashiCorp Nomad, etc., or maybe K3s or some other "K8s-lite" variant. How much does it cost to develop and maintain a container orchestrator, DHH? That by itself could make up the cost difference. :/
the core of efficiency in on-premise comes when you dont have to worry about cloud cost optimization, cloud security, building a ton of tools to manage cloud resources, cloud resource attribution. security is much more relaxed when you run stuff in your own data center. like for example, you don't need last-mile security from the API GW to your ec2 worker node. small things like that keep adding up.
Correct me if I'm wrong, but I think you can still leverage the DDoS-prevention capability of a service like Cloudflare and have DNS route to your own data centers. I dunno, maybe I'm wrong.
Monolithic server architecture works fine for like 99% of stuff. Sure, when you're getting 5 hits a month, the $5 VPS might be cheaper. But as you grow, that VPS will go further than all these microservices. Microservices also make software so complex, when it often never has to be.
It seems like an eminently reasonable strategy to me to start in the cloud, use all the free pricing models that are available, and keep your application containerized/dockerized/k8s'ized/serverless or however else cloud-native you want. Enjoy the free tiers and auto-scaling while you are small and building your business, and then, if you're in a SaaS model where your business has settled down into a predictable level of service and you want to migrate out of the cloud, it's just a matter of plopping that same cloud-native stack onto your own hardware.
I suppose I need to test this statement, but in my experience most Ruby services are bottlenecked by the database. (A huge part of Basecamp's AWS bill was RDS I'm sure.) They are just a front-end for shoveling data in and out and rendering a template. Ruby is also not uniquely slow compared to similar languages that are still "hip", like Python, and a lot of good work has been done to speed it up. Most Ruby services ever written are probably running happily on a single not very big instance serving a few thousand users at most. I prefer not to write it anymore because I am pretty much over any language without a good type system, but the misconceptions are irritating. DHH, although his writing style says "high school student", is right on the money about cloud and on-prem, having done both. On-prem is cheaper for the same hardware, and not really any more difficult to manage at large scale, but you lose flexibility and need a different skill set. Cloud seems simpler at first, but rapidly becomes not simple, just more flexible.
Not just Ruby. Most web services are bottlenecked by IO wait. Your language does not matter when your code is waiting for an SQL query that isn't hitting an index. Execution-wise, I'm pretty sure Ruby and Python compute at about the same level. Where Ruby has trouble, though, is that it's memory-heavy. Not an issue for most services, but if you do anything that necessitates a shit-ton of object memory allocation (anything data-science related), that's where Ruby takes a huge hit. At least last time I checked.
I built an on-prem Kubernetes cluster before, it was for an air-gapped lab in order to run an application that would combine data from a bunch of internal systems. The hardware was very nice at the time, and the server and service plan from HPE ran to about $50k. A license to run our software in their lab was something like $250k a year.
Yup, Kubernetes is the way to go. Don't build for a proprietary cloud. Build for Kubernetes and run on commodity servers: on-prem, an owned or leased data center, managed Kubernetes in the cloud, or self-managed Kubernetes on VMs obtained from the cloud.
That ending... beautiful 😢 With that said, I was talking to someone who was running a small company and hosting everything on-premises. They're mainly business-facing and host internal tools and such, and he swore by it. I know this doesn't count for much, but I'm thinking maybe there's merit here if you know your load isn't going to grow to the point that you have to think about scaling globally to thousands/millions of requests per second?
Say what you want, but the focus on privacy makes me like this company very much. Hey is a very interesting idea; I've been using it for two-plus years now, very happy :D
That's why I love Go with serverless. As long as the provider satisfies the normal http.Handler interface (like Vercel, GCP, etc) you could easily move to another provider. You can even build the subrouters inside of your function because you have the full request object.
@@rochakgupta6116 it has nothing to do with the application design. It's the fact the standard library defines that interface. For example, with a Python lambda on AWS, you get a context and a JSON payload. If you move to another provider, you now have to deserialize to that new model. With Go you get the http.ResponseWriter and *http.Request objects directly. Now you can use anything that satisfies http.ServeMux as a router. This means you can use the same routing logic in a regular application vs a serverless function and never need to have different functions to check HTTP methods, path variables, etc depending on whether it's running as a function, as an app, running on AWS, running on GCP, etc. It can be the same everywhere.
Wait till their disks start failing in a couple of years, weird hypervisor bugs pop up, abuse in one container (and an e-mail service sees a lot of that) creates problems in other containers, the CPU/RAM ratio of the hypervisors turns out way off compared to the workload of the containers, wasting unused resources whenever either the RAM or the CPU of a hypervisor is fully in use, and a whole new scale of possible networking issues arrives from running your own switches and routers. The extra responsibility you take on compared to running software in "the cloud" is always bigger than you'd expect, and I'm afraid the cost reduction of running your own servers is mostly short-term.
Tom is a genius. JDSL is very powerful. JDSL is written in JDSL. It means you don't need a cloud server or an on-premise server: it runs itself. In the future, servers won't run JDSL; JDSL will run servers.
Wish more companies would go lighter on cloud services. Many of the ML services especially are dismal and often prove inferior to something you could just do yourself with a few packages.
What?! You know this! On-premises and cloud hardware, features, networking, etc. are 100% the same, and on-prem can still use Cloudflare just like any firewall. The only difference is management and ownership. Cloud is on-premises, just not solely managed and owned by you.
I was psyched to see OpenFAAS come up in this conversation. Serverless doesn't have to be expensive. You just have to know how it works and set it up yourself if you want to save money with it. That said, still working on figuring out OpenFAAS...
@@chris-pee lol, I can't EASILY do anything with OpenFAAS at the moment, but that seems to be the main point of it. Haven't actually played with it for a while.
A few months ago I was chatting with my ex-IT-department lead, and he was arguing it's better to have your own servers (and, if it were up to him, Windows servers, even though we didn't have any Microsoft-ecosystem apps, just because he didn't like reading up on how to do things in Linux). I was arguing that for newer projects (even if they were rare) we should use DigitalOcean droplets so that we didn't have to wait 1-4 weeks to get servers, especially since I was the guy who, after getting the connection details, had to install everything on the machine anyway, because the guy who was paid to do it, the IT guy, was refusing to learn (80% of the time I was just copy-pasting into the terminal from the DigitalOcean docs).

And in this discussion he told me he currently had to buy more cores and more RAM for the server because the Java devs were too lazy to do some refactoring and stop wasting memory. Not to mention the database was an IBM DB2 setup that nobody working on it was actually capable of configuring/optimizing (I found this out when I told them to start doing load-balanced replication, at least splitting reads and writes, because they were constantly hitting the max-connections threshold and transactions, even small ones, were taking forever).

As a QUESTION: was I wrong when I gave him my suggestions?
Going raw-dog from the start has its merits, too: old $500 desktop with embedded graphics will carry you through demos to funding, good deal on 100kg of iron nearing end of life (replaced or shutdown within a year anyway) will not allow turning a blind eye to such important aspects as monitoring, capacity planning, recovery from hardware failures, data persistence, failure modes, stress testing, etc. No button to upgrade rds instance in 2 minutes to fall back asleep in 5 may bring attention to the schema before everyone forgets which features use what tables, how and why. Maybe some stuff will end up in hbase and other in hive/snowflake/whathaveya with pg binlog continuously imported into. Not having luxury of single day of frantic plowing once in 4 months, you’ll figure healthier plowing schedule, quick Thursday plow and another on Saturday afternoon, without urge to fix all problems accumulated over 4 months asap bc hourly bill is the same whether ur plowing or not. Forced to hire infra guys with longer beards, you get exposed to their poetic feedback on your architecture, choice of tools and growth strategies; such a treasure. And if grim reaper comes, hardware will go way before the desks, chairs or coffee machines, with less of a haircut for amortisation. Frequently halting creative flow of features to deal with infra sucks, but unlearning 1-click fixes for anything infra is no cake either.
"Stop writing Ruby" - facts. Not only that, Rails is the slowest web framework in Ruby as well. Roda + Sequel is 10 times faster than Rails, but DHH loves his own creation. All those apps are written in slow Rails.
The hard part of creating a Saas company is to make it become successful, not making the code run fast. Shorter developer iterations will help you get there. If you're working for BigCo with an already defined business model where economies of scale apply, sure a language that focuses on making your server run faster will make a difference. For all the medium to small companies out there though, time to market is way more important than time to compute.
A friend of mine works for a company that is halfway through doing this, and now their ops team is bigger than their dev team and they're already considering abandoning the plan altogether. AWS seems expensive, until you start managing everything (including the hardware) yourself. You can definitely end up spending a lot with AWS (especially if you actually listen to their solutions architects), but going fully self-managed is a pretty drastic solution.
The author has posted an article called "Why we're leaving the cloud" back in October 19, 2022, which goes a lot more into the technical details and explains a lot better why the decision really made sense.
08:15 I think you might be the only person who can strengthen their argument by waving hands around and saying, "You know what I mean". I know nothing about Ruby, but I know what you mean.
These are legacy services. Are they really going to scale up rapidly? It seems like a 2 week lead time on needing better hardware is fine. I suspect they will never need new hardware unless they have really bad programmers.
@@ThePrimeTimeagen - That's true, and it can happen in a way that's a surprise. Hopefully you've engineered in the necessary redundancy to deal with it. If you have, a two-week lead time on replacement hardware seems like an acceptable risk against a sequence of failures that exceeds your redundancy planning. You have a lot more experience at that level than I do, though, I suspect.
@@invinciblemode - I worked for ServiceNow in Devops for a year. They have their own cloud. They have a datacenter in every region they service. But they were wanting to go into markets where they didn't have datacenters yet without the cost of setting one up until the market was large enough. They contracted through (of all the stupid places) Azure. But, they were very committed to their own cloud and their own datacenters. Partly because it enabled them to control compliance with regards to data location. But also because it was a lot cheaper for them. But, ServiceNow is an interesting case. They started before AWS and the like were a thing. And they're big enough to have significant economies of scale.
The place where I work recently moved to the cloud due to scale issues: the in-house data centres couldn't take the load. But now the cost is so high that they're taking measures to reduce load on the data warehouse. We're in analytics, processing more than 4 trillion records per year for our customers before the move, and now it's even more. I really don't know which option is better, but as long as you are able to provide optimal service and you are not underwater, whatever suits your needs is best.
RE the Plough confusion... What the guy said actually makes sense if you read it. He was referring to the machine itself in his analogy i.e. if you only need a Plough machine 3 times a year, why buy one and store it in a barn most of the year, rent one 3 times a year instead. I obviously have no idea about farming and ploughing and plough machines, so whether that is actually sound advice for the real world of farming I have no clue.
3:38 "How does it take this much compute power to run some legacy users?". The paragraph above says "But the big move was Hey...". That compute power is for all their apps, not just legacy stuff. Which totally changes the value proposition.
My father was complaining that his CIO wants to migrate to AWS even though costs will be high in both the short term and the long term; apparently, all these people care about is sounding cool to investors. By the way, u and v used to be considered two variants of the same letter, hence the odd naming of w, which used to be written as uu. Like c, this letter would take on slightly different pronunciations in different contexts due to Latin sound changes.
We should treat slow languages the way we treat crypto: electricity and resource suckers. I wish there were energy tools for x86_64 like ARM has, which tell you energy usage while writing programs.
the cloud king came down from his throne to save all the lost repos whose original sin was only that they existed not in the cloud. he took on all the git blame, and merged all who accepted trunk based cicd into his master repo and now they dwell on the non-cloud for eternity.
I really enjoy your videos and there are areas I feel like an idiot watching your videos. Then watching this I realise how you know so much in an area then know enough in another area to think you understand when you don’t. Still a good video
Well, "we need it" and "a couple weeks away" is something very familiar to anyone who does software development. I get why it's something hard to come to terms with when it comes to scaling and reliability though :D. I think if your business is very predictable you can still benefit from an on-prem solution, seeing how much it costs to run enterprise infra at scale in the cloud.
I used to work for GCP. I think clouds are a great service for rapid scaling, build/dev work, disaster recovery, a few other cases. I also think that once things settle down, your stable system should be run "on prem".
Cloud is for loads that are highly volatile. If your business has huge load peaks and your architecture can scale, then you can save a lot of money by running it on a cloud. The whole idea of AWS started from Amazon noticing that their own data center servers ran 70-90% idle, and figuring out how to rent that idle capacity to outsiders. The same will easily happen to anyone running their service in their own datacenter if the load varies a lot.
0:45 They brought heritage services off of the cloud. Services which they aren't selling anymore, but providing service for to existing customers. This means that they aren't going to grow, ever. And if hardware requirements increase, they will know, well in advance, just by looking at the numbers of how much of their existing hardware is being used and drawing a trend line. This is why they moved away from the cloud. There is no point in being prepared for a sudden spike in customers and thus need for more hardware, when you are not selling.
I imagine this guy was too young to see how complicated things were before cloud? Spending nights in DCs, power outages, slow scaling, hardware issues, hardware technicians coming on-site to fix things while stuff is down, storage and link backups, power backups? All this needs to be done by hand, not through a comfy API. He'll be back in the cloud in no time.
When load is predictable, get off the cloud. Cloud is great for startup and massive growth, but once you level off and can be predictable about things (and don't have Ticketmaster-type spikes)... get off the cloud. Seems logical to me, but not for everyone.
On-prem also matters for businesses that are regulated and required to store their data within their locality. I am sure there are other use cases, but this is usually the primary one that I have encountered.
Boss: What're these costs? IT: In-housing some operating expenses while saving 100-400 million in 3 years. Boss: *begins spasming on the ground* Get, GET OUTA HERE AND DO IT! Roko's Acolyte: The Basilisk will spare your worm life when she is born.
The 25k users is talking about the legacy products; he then moves on to talk about HEY, their latest product, and that's when he starts talking about the new hardware. You have misunderstood / misread.
3:15 tens of thousands using the legacy apps that 37 signals are committed to maintaining even tho they don’t actively market or sell those products anymore. As for products they currently sell, basecamp alone has more than 15m users.
7:18 the cheapest cloud vendor that could meet my company's baseline performance and user experience requirements was 16 times more expensive than co-locating hardware and spending a few hours a month for the same end-user experience. The cloud solutions we tested have had 10x the amount of downtime and time with unacceptable levels of performance compared to the colo servers in the last 6 years. Most companies of similar scale spend more time managing their cloud infra than we do for our dedicated infra.
1. Just contract with Cloudflare to front the system to take care of CDN needs as well as your favorite DDoS. 2. Do some capacity planning during your monthly tracking report… that will give you more than a few weeks heads up for new HW needs.
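The monthly capacity check in point 2 can be sketched as a simple linear trend extrapolation. A hypothetical Python sketch (the utilization numbers and the 80% threshold are made up, not from the thread):

```python
# Estimate when peak utilization will cross a threshold,
# given monthly utilization samples (hypothetical numbers).

def months_until_threshold(samples, threshold):
    """Fit a straight line through (month_index, utilization) points
    and return how many months until it crosses `threshold`,
    or None if utilization is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # no growth: no new hardware needed
    intercept = mean_y - slope * mean_x
    months = (threshold - intercept) / slope - (n - 1)
    return max(months, 0.0)

# Peak utilization over the last six monthly reports: steady ~5%/month growth.
history = [40, 45, 50, 55, 60, 65]
print(months_until_threshold(history, 80))  # crosses 80% in ~3 months
```

With a steady trend like this, the crossing point is visible months out, which is the "more than a few weeks heads up" the comment describes.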
Depends on the type of task they are doing; replacing Ruby with something like Go might give you only a small percentage of gain if all their computation lies outside of the language.
Obviously they can invest money and replace Ruby with Go; it is easily said, but it takes commitment, so overall I don't get why he is focusing on this point. One thing is certain though: there is value in avoiding the cloud at some point. If you are spending 3.5 million per year on AWS, maybe it is time to have your own datacenters.
FWIW my understanding is that they brought home not only the legacy apps, but also their main businesses of HEY! and Basecamp, which serve far more than 25,000 users.
One take away I see. 37signals is also most likely lean enough that the purchase process for new servers is not the red tape hell that exists at big corporations.
I just want a federated open source PAAS abstraction layer. Take all self hosting companies and let them run their stuff on each other’s hardware through this abstraction layer that becomes easier to hire devs for because it’s a standard. It would protect against DDOS just as much as AWS if enough self hosting companies joined in.
Cloud is like a hotel: it's great for a weekend, or if you suddenly have a bunch of unexpected guests, and of course if you can't afford summer houses in geographically disparate areas, it's good if there is an earthquake. But if you live in a hotel you end up paying too much for a lot of stuff you don't need.
Big Brain comment. Props for this
So what's the alternative?
Cloud is like renting a room in an apartment instead of owning a home. If the plumbing breaks someone else is fixing on their timeline.
Hotel as an analogy for infrastructure is one more level up in the abstraction; better suited to the tradeoffs of pre-packaged services, à la carte, dedicated support, premium plans, etc.
True. And if you stay long-term in a hotel, you can get deep discounts. Otherwise, you pay for however many days you stay.
@@oussama40612 Buying and setting up your own servers, like buying a house. Sure, it will be more expensive at first, but you have full control. It's no apartment.
Business args aside, it always creeps me out knowing how much of the web runs on AWS + other clouds
It's scary to think about
i am definitely on this team and i think the world probably would be a better place with less AWS
At this point the only decentralised part of the internet are the users. And decentralising users isn't the empowering part of decentralisation.
@@ThePrimeTimeagen and without azure 😂
you just mention it and we are all hands on deck troubleshooting this ec2 issue in us-east-2
Instance Connectivity
Jun 30 10:46 AM PDT Power is now actively being restored and we are seeing recovery in connectivity to some instances and volumes. We are continuing to work towards full recovery.
Jun 30 10:16 AM PDT We can confirm that a small number of instances have experienced a loss of power in a single Availability Zone in the US-EAST-2 (Ohio) Region (use2-az1). Some EBS volumes within the affected Availability Zone are experiencing degraded performance. All customers affected by instance connectivity and EBS impaired volumes were notified directly through the Personal Health Dashboard, beginning with an EC2 notification at 8:34 AM PDT. Engineering teams are actively working to restore power now.
You can run your apps on your own server and then hide it behind Cloudflare for the DDoS protection, if attacks are a problem. You do get another bill, but that's a bill for security, not for running your code.
aws waf\shield advanced is also not cheap so everything has its cost
Recently CloudFlare gave in to some DMCA notices and divulged the backend IP of a website. If someone can get the backend IP they can DDoS.
Reverse proxies and load balancing should do the trick though, just can be a real pain in the ass to engineer correctly at scale.
@@jepemz873 Folks who issue DMCA notices don't generally DDoS. Also not difficult to cycle the IP periodically.
@@lhxperimental Super true! Or be based and round robin like a paranoid lunatic 😅
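For reference, the "front your on-prem box with Cloudflare" setup discussed in this thread comes down to a single proxied DNS record. A hedged sketch of the payload for Cloudflare's v4 DNS records API (the hostname and origin IP are placeholders, and you should check Cloudflare's docs for the exact endpoint and auth headers):

```python
# Sketch of putting Cloudflare in front of an on-prem origin:
# a proxied ("orange cloud") A record makes Cloudflare answer DNS with
# its own edge IPs and forward traffic to your origin, hiding its address.

import json

def proxied_a_record(hostname, origin_ip):
    """Payload shape for Cloudflare's v4 DNS records API
    (POST /zones/{zone_id}/dns_records). `proxied: True` is what
    enables the DDoS-absorbing reverse proxy in front of your server."""
    return {
        "type": "A",
        "name": hostname,
        "content": origin_ip,   # your on-prem server; never published in DNS
        "proxied": True,
        "ttl": 1,               # 1 means "automatic" for proxied records
    }

payload = proxied_a_record("app.example.com", "203.0.113.10")
print(json.dumps(payload))
```

As the replies note, this only hides the origin as long as the backend IP never leaks through other channels.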
I think there was a misunderstanding here. The ~25k is for legacy users. But that same server also hosts "Hey" and "Basecamp" which have millions of users. See the line "But the big move was HEY"
That frustrated me so much. There even were people who pointed it out in chat, but he just refused to understand what they said.
@@megaing1322 Yeah, it often seems like he's just trying to get finished as fast as possible and ends up being sloppy.
Wooordd.
@@sirhenrystalwart8303 right I've noticed too!
Because it's just a poorly written article. If you have an ambiguously named product, you have to be explicit in the same sentence. But this article relies on a lot of all-over-the-place vagary to justify its conclusion.
And no, it's not a good sentence just because it's on a website called Hey and it's linked, and then the next sentence says it's an application. What is "it" when you've just heard a sentence ending with an interjection?
It's web-2.0 brand-style writing that just adds to confusion by co-opting an everyday word.
Ahrefs wrote a blog about how they saved $400m over 3 years by taking their core infrastructure off the cloud. Their product wouldn't be profitable if it was all in the cloud.
I think they're a special case, as they need to do very extensive crawling and thereby need a lot of straight on compute and network IO (which both are pretty expensive in the cloud). Yes for some services it matters, but I'd say that's more of a one-off as long as you're really using the capabilities of the cloud (autoscaling to have the right capacities based on request volume, cloud-native services etc.) and taking into account cost of administration etc.
@@DragonRaider5 It's not the type of application, it's the size of it. Once you scale up to a certain point, any cloud service will have diminishing returns. I've seen 10's of thousands of dollars per month for a single service on AWS. Cloud makes sense for startups and hobbyists, but once you're making any decent money, it's time to rent your own hardware, then eventually buy it...
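The break-even math behind claims like these is simple enough to sketch. A back-of-the-envelope calculator with made-up numbers (not real quotes from Ahrefs or anyone else):

```python
# Back-of-the-envelope cloud vs. owned-hardware break-even
# (all prices are hypothetical placeholders).

def breakeven_months(cloud_monthly, hw_upfront, hw_monthly):
    """Months until cumulative cloud spend exceeds buying hardware
    plus its ongoing colo/power/staff cost. Returns None if the
    cloud is cheaper month-to-month and never breaks even."""
    saving_per_month = cloud_monthly - hw_monthly
    if saving_per_month <= 0:
        return None
    return hw_upfront / saving_per_month

# e.g. a $10k/month cloud bill vs. $60k of servers + $4k/month colo costs
print(breakeven_months(10_000, 60_000, 4_000))  # pays off in 10 months
```

The interesting variable is scale: the bigger the steady-state compute, the larger `saving_per_month` tends to be relative to the upfront cost, which is the "diminishing returns" point the comment makes.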
Haha this is the first time i can correct ThePrimeTime! The plough analogy was referring to the piece of equipment called a plough, and the fact that after using the plough you would put that piece of equipment in the barn afterwards and it would sit there doing nothing for the rest of the year!
My firm rents space in a data center with our own servers. We run all of our environments there e.g., development, testing, uat, staging. When we went "cloud" shopping, we were surprised at how much more expensive it would be than what we have now.
What happens when there's a fire or hurricane or the company hosting your servers goes under?
@@lukealadeen7836 That's why you need backups.
@@VivekYadav-ds8oz so you have to pay double the price for the backups hmmm?
@@lukealadeen7836 no dum dum, you just host your workloads in 2-3-4-5 geographically separated colos and run either an active-active or active-passive strategy. Yes, it reflects on cost, but it is very easy to outgrow your startup needs to where it makes sense to self-host.
@@lukealadeen7836 What happens when you get hit by a bus?
😢 I like working with Ruby.
Same. I'm not looking for a way to build the fastest application, I'm looking for the fastest way to build the ideas I have, and ruby on rails works well for me, and it can scale well beyond any success I could dream of for any app that I build.
I once read a comment from a guy saying "No one cares if you have the fastest todo app". Why use a systems language to render a todo list in 1 millisecond instead of 2, and in return be 5 times slower to build because you need to take care of stuff like memory allocation? It's beyond my understanding.
Not sure why Rails gets so much hate. It is one of the fastest ways to build amazing applications. There are a lot of optimisations you can make to ensure the end user has a great experience. Serverless is way slower (cold starts) but seems to be the rage these days. Once you reach scale (which 99% of companies never do) you can rewrite the slow pieces of your app if you need to. DHH also gets a lot of heat, but in my opinion you need to be a little crazy to achieve great things!
What language does primeagen not dunk on though (apart from rust)
@@kengreeffdon’t disagree at all, the right tool is the one that works for you.
Serverless is useful where latency is not paramount and the task is small, as the cost is $0 or near $0, versus a server running 24/7 and sitting idle most of the time. That's really the only benefit I see: cost.
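The "near $0 for small, bursty tasks" point can be made concrete with rough arithmetic. A sketch with rates loosely modeled on typical FaaS pricing tiers (treat the numbers as assumptions, not quotes from any provider):

```python
# Rough serverless cost model for a small, bursty workload
# (per-million-request and per-GB-second rates are hypothetical).

def faas_monthly_cost(invocations, ms_per_call, gb_ram,
                      price_per_million=0.20, price_per_gb_s=0.0000167):
    """Request charges plus duration charges (GB-seconds of RAM-time)."""
    gb_seconds = invocations * (ms_per_call / 1000) * gb_ram
    request_cost = invocations / 1_000_000 * price_per_million
    return request_cost + gb_seconds * price_per_gb_s

# 100k invocations/month at 100 ms each with 128 MB of RAM:
# pennies per month, versus a VM billed 24/7 whether idle or not.
print(faas_monthly_cost(100_000, 100, 0.125))
```

Flip the inputs to millions of sustained invocations and the comparison reverses, which is why the "only for bursty loads" caveat matters.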
Don't agree with you here. They said their demand is predictable; they can wait two weeks for more hardware. Plus, they got a shitload of extra capacity with their current order, so they have ample time to respond to growing demand. Also, I am surprised at your being surprised at the db being the bottleneck. What else would be the bottleneck? It's almost always the db. Stateless compute is easy to scale.
It really boggles my mind that you think you need to be in the cloud to use Cloudflare's DDoS protection. Cloudflare just sits in front of traffic and forwards it to whatever address you tell it to, it doesn't matter whether that address is for an AWS API Gateway, an ECS instance, a bare metal server, or whatever else.
Never mind, he is famous not for knowledge but for being entertaining in a weird way.
Can't you address this with hosting DNS on cloudflare? My self hosted stuff has DDoS protection through it.
@@burneternally address what?
We had Cloudflare when on-prem. I've spent 30 years using on-prem servers and about 5 years using Azure. For on-prem I was hands-on, installing kit, repairing faults, as well as admin, development, etc. Azure is very expensive and you'll spend a lot of time and effort working out what you are using, how to save money, how to retool your apps to save money, avoiding doing stuff because everything costs money, worrying about whether you've left some super-expensive PaaS on, you will lose sleep as management will be questioning every expense. There's lots of cool stuff in the cloud, but I think it's a mistake to move everything there.
I see cloud Vs on-prem like cars; HP Vs PCP, the latter being cloud. All the dunces think PCP is good when it's shit and in reality, it's exactly made for dunces which think it's good when it's shit. Cloud was a new product for dunce CIOs to be sold old shit as new shit and they all bought....
I think you’re describing the sweet spot for a lot of use cases. Use a simple (possibility on prem) backend, and proxy that through a global cloud provider at the front.
@@clickrush you sound like a dev; no knowledge of infrastructure.
" worrying about whether you've left some super-expensive PaaS on"
There really should be a law that setting a budget is required before using the service.
This whole "we'll bill you later, but won't tell you unless you know how to jump through our baroque billing/notification system" thing is basically a separate service from the service you care about.
" you will lose sleep as management will be questioning every expense"
Adding to my other comment: if management is questioning expenses, that points to management needing to take responsibility for FinOps (cost management) by creating budgets with automation runbooks.
So if you're not the lead sysadmin, you shouldn't be losing sleep over things you shouldn't have access to, that are above your pay grade in the org; if you are a sysadmin responsible for budget costs, then you should be paid enough that losing sleep is a thing you look forward to.
How does this Prime become the Prime giving DHH a standing ovation in that latest interview?
Prime's thinking evolved, theo is next!
I'm happy to see more decentralization of compute. If running off cloud gains enough traction it should force AWS, GCP etc to be more competitive.
Or at least get them to stop being assholes
GCP is already much cheaper than AWS. To the point that I don't trust them not to turn everything off someday.
Compute isn’t even my main issue, pricing-wise, with AWS!! Bandwidth, egress charges, hourly charges for “attachments”, request charges, etc… Of a 20k bill, probably less than 10k are compute charges…
@@levifig "oh yeah, compute all you want on our servers, it's very cheap. What's that, you wanna move the results to another cloud? Oh, that's gonna cost you 😊"
@@LtdJorge Simply moving data between AZs is going to cost you...
Cloud can be insanely expensive for enterprise. I absolutely believe the cost savings on-prem point because of the numbers we've been running. The cute arguments for cloud i hear is that 'bUt we huv ta hire xperts.. and dats ard'. It's my opinion that for on-prem, you could have lavish amounts of paid server guys, devs, network engineers and still come out ahead in terms of cost. Relying on outsourced expertise has its problems... TAC being more terrible, thus more down time or prolonged tickets, and other maze like ticket games you have to play. Having on staff expertise gets faster resolution, products not floating on (potentially) poorly managed infra, and better continuity in your environment (paying staff not just for expertise, but for loyalty).
What does the physical infrastructure look like for a website with moderate traffic today? Curious since I've always loved the idea of running and maintaining my own servers.
A pattern that has helped me explain cloud vs on-prem consumption to companies is cooking at home vs going out for dinner every day. In general, you'll spend significantly less money if you manage your own cooking and avoid going out to eat every day. If you are not much of a cook, or you do not like visiting markets for ingredients, then you'll come up with a lot of reasons why going out for lunch and dinner is superior, even though you will never convince anyone who is on a budget, likes to cook, or doesn't mind cooking.
Cloud hype was built on the promise that it was new, modern and cheaper, unlike the old IBM mainframe, and that you don't need a lot of people to operate it, because the cloud was simple to use.
And cloud providers made the UX experience so bad and hard to use that even IT people are complaining, because cloud providers can make more money with their dark patterns.
The newest IBM mainframe Z16 has downtime of 3 s /year and it can cost $10M (the most expensive one) and it can handle 1T transactions / day
The only thing that IBM mainframe can do is scale in hours on different continent.
You know that the cloud is bad when they have certification on how to work with the cloud.
Almost everything that cloud providers promised was false, and that is why companies are returning to cheaper options.
Cloud advantages: low up front costs, rapid scaling (up and down), standard tools between different customers. Learning AWS kinesis or CDK or IAM is a pain - but if you move between employers, you can understand their AWS kinesis or CDK or IAM setups quickly. If you get a job at a company running its own hardware, you have to be onboarded by an existing senior systems engineer.
But sure, if you have relatively fixed, relatively high computing resource needs then the cost savings from running your own hardware are colossal.
@@mikeswierczek
If you treat your employees well and don't underpay them, it doesn't really matter, because the up-front cost of onboarding a new hire is going to be dwarfed by the long years of work he or she will be putting in.
@@HrHaakon agreed. I'm speaking strictly of the case where you need rapid onboarding, or your company is small and you lose one of your key systems engineers to illness or a winning lottery ticket. If your company infrastructure is on AWS, Azure, GCP, etc... a new hire can figure everything out just from the web console.
@@mikeswierczek
Yeah, that sounds reasonable, especially since you at that stage wouldn't have very exotic architecture.
@@mikeswierczek well, AWS services are just wrappers on top of already existing OSS tech
Also there are private cloud solutions that you can implement on your own hardware.
If you are a small company and you want to sell to an international market, then the cloud makes sense.
Let's say you based your company in Kazakhstan, but you want to sell products to USA and provide good service
You can launch cloud instances in matter of minutes with no problem.
Well-established companies that provide products in a country, like banks and insurance companies, don't need the cloud, because for their workload it's more expensive and less reliable than hosting your own solution.
Plus, to make it reliable you need to put the effort into software, which costs money.
The other solution is to buy an IBM mainframe with 3s/year downtime.
It's expensive at purchase and you need people with IBM skills to maintain it
The problem with the cloud is that the cloud providers didn't deliver on the promise that it's cheaper and easier to use.
You need skills no matter which one you use, and many times your own hardware provides a better UX, because it doesn't constrain your workflow, because it doesn't impose "best practice".
On the other hand, cloud providers have "best practice", and a simple thing like creating a new db instance from a backup is a pain in AWS.
And these "best practice" have vocabulary that you have to learn.
Why ?
Because cloud providers use dark patterns to make more money
When your compute becomes quite predictable, this is the point where this discussion actually becomes really valuable.
You can divide up the services and get more control, but it all depends on your case. Plus, Ruby: yeah, we can complain about it, but it's easier said than done to switch. It's not just about the tech; it has to make business sense, which in this world always conflicts. Business vs tech is the reason why adoption takes a long time to get fully implemented.
The name is: I'm not sure what the exact answer should be, but you should be able to compare it for yourself, and make sure that you have enough users and spend to be able to try out each thing, agen.
"What is this [(192 thread Dell R7625s)] thing doing?"
Mostly compensating for the performance of Ruby, probably
the soy self-hosted ruby dev vs the chad typescript cloud dev
Who is the soy drinker now
The true chad:
Self hosted on pentium laptop serving 25k users
The Ruby hate is hilarious. If you look at Ruby 3 there are actually considerable performance improvements with YJIT. Who knows if they're using it, but if they are, it is actually respectable.
I think a lot depends on other factors as well. For example, at work we are doing digital forensics, and some law firms can be very demanding in wanting to know who can have access to what resources in your infra, down to who can access the database directly. With a big cloud provider you can't really guarantee that there is no way a malicious employee could ever have access to that stuff. But I don't think there are that many companies that need such extreme guarantees.
However, it is kind of unfortunate that we have new devs that have never configured a server and deployed an app on it themselves. You definitely learn a lot doing that.
I know a study which showed that businesses don't get monetary benefits from being in the cloud. I think in the early stages it's okay to be in the cloud, and it's a must when you're on a hype train, but there is also a point when you grow out of it. Basically you have four stages, where in two of them the cloud is okay, and only in the hype-train phase is it actually needed; at least that's the theory so far.
In general I think your own server became more viable because of the end of Moore's Law. Unlike in the past, you get top performance from your server for many years (at least when you aren't in the AI stuff). It's something that needs to be decided case by case. In this case I think it makes sense because of the stable user base from the legacy products.
It only makes sense to move to the cloud if you can fire 90% of your infrastructure team. Unfortunately, it seems like your infrastructure team now needs to learn server hardware, networking, and now new management tools to do their job. The only part of the job that you’ve eliminated is the physical unboxing and installation of servers, which is the cheapest part of the job.
And a great place to get people in on the ground floor if you want to hire people who may come from non-traditional backgrounds. But hey, diversity is only skin-colour deep in corporate america.
Rewrote my Ruby service to Rust, from 5 ms got down to 1.5 ms p90 response time. Not a big deal, especially given that it took 20 times more time to write this code in Rust.
No one talked about rust here, anything is faster than ruby
with Rust, you pay the cost upfront
@@tonyhart2744 absolutely, given that 99% of startup ideas fail, you don't want to pay 10x upfront for development
Maybe it would be more interesting, and nearly as fast, to rewrite it in C#. I assume that, depending on the complexity of your service, it would have cost 3-5 times more instead of 20 times. But you should also consider whether you were already proficient with Rust before writing it; you can't expect to take a completely new and fundamentally different language you're not comfortable with and develop in it as fast as in your primary language.
Sounds more like a skill issue. You're probably better at Ruby, so it doesn't take as long. Although Rust absolutely does take longer to develop.
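For readers unfamiliar with the "p90" figures quoted in this thread: p90 is the latency that 90% of requests beat. A minimal nearest-rank sketch with synthetic samples (not real measurements from the commenter's service):

```python
# What "p90 response time" means: sort the latency samples and take
# the smallest value that is >= 90% of them.

def percentile(samples, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [1.2, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.8, 2.0, 9.9]
print(percentile(latencies_ms, 90))  # p90 = 2.0; the 9.9 ms outlier is excluded
```

That exclusion is the point of quoting p90 instead of an average: one slow outlier (a GC pause, a cold cache) doesn't dominate the number.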
POV: Netflix engineer is confused that people don’t need AWS.
Amazon reduced the cost of operating Prime Video monitoring by 90% by moving their AWS microservice architecture to a monolithic architecture running on EC2 (which means the stack could also easily run in data centers on dedicated machines). They published an article about it; you should look it up!
i have read it on stream and it's on this channel somewhere!
@@ThePrimeTimeagen Oh, cool, I didn't know! I'll look it up I'm curious to hear your thoughts on it !
Isn't that the one that went from serverless to monolith?
The funny thing about serverless is that it's pretty much what we did pre-2000, with a bunch of PHP and Perl code being called through CGI. And then VMs came and we were like: wow, now I can run the modules and versions I want, and not what the ISP has!
if they rewrite all that ruby in rust they could serve all their customers in a raspberry pi
FACTS !!!
They'll die of age trying to write real world projects in Rust
Rewrote my Ruby service in Rust; from 5 ms got down to 1.5 ms p90 response time. Not a big deal, especially given that it took 20 times more time to write this code.
@@simpl1gdepends on the size of the project 😅
I find that writing apis in Rust + actix is way simpler than other languages once you get the gist of it
@10:20 The whole "when you build your company around a product someone else offers it makes me hesitant" thing is funny to me in the serverless conversation, because the companies I have worked for are built around a WHOLE lot more than just AWS lambda. E.g. they've been fundamentally dependent on top of services like Salesforce, financial service providers, etc etc etc. Not to mention code that is dependent on all kinds of libraries and frameworks that rot and go sour all the time.
Salesforce is such garbage, I hate using it.
Thank you for attending my Ted Talk.
@@CottidaeSEA Salesforce developer here. Company got sold on Salesforce. It'll be up and running in 6 months, they said… 3 years later, wishing we had rewritten the legacy VB6 app in a modern language. Would have had it done in a year.
You can have Cloudflare both HTTP-proxy and scrub traffic to your on-prem networks. We've been doing it for years. If we shoved everything to the cloud we wouldn't be able to afford the compute, network, and storage that our QA-testing OpenStack cluster uses. Much of the rest of it could be cloud-hosted, but I wouldn't want to be cloud-only. There have been at least a dozen incidents of various cloud providers having hours of outages over the past several years where we were entirely unaffected, because all core infra is on prem.
This is what all the pros do. If you're really nervous, you can hide IPs with a dynamic service that forever and always changes DNS records.
@@complexity5545 For that you need an ISP that gives you different IPs every so often, right?
@@kuroxell What else does your home internet do ?😂
Here's the bit not mentioned in the article. MRSK is a home-grown deployment framework that they are positioning as a competitor to things like Capistrano, Hashicorp Nomad, etc. Or maybe K3s or some other "K8s-lite" variant.
How much does it cost to develop and maintain a container orchestrator, DHH? That by itself could make up the cost difference. :/
the core of efficiency in on-premise comes when you dont have to worry about cloud cost optimization, cloud security, building a ton of tools to manage cloud resources, cloud resource attribution.
security is much more relaxed when you run stuff in your own data center. like for example, you don't need last-mile security from the API GW to your ec2 worker node. small things like that keep adding up.
correct me if this is not correct, but I think you can still leverage the ddos prevention capability of a service like cloudflare, and have dns route to your own data centers. I dunno, maybe I'm wrong.
you are correct
Monolithic server architecture works fine for like 99% of stuff. Sure, when you're getting 5 hits a month, the $5 VPS might be cheaper. But as you grow, that VPS will go further than all these microservices. Microservices also make software so complex when it often never has to be.
It seems like an eminently reasonable strategy to me to start in the cloud, use all the free pricing models that are available, keep your application containerized/dockerized/k8'ized/serverless or however else cloud-native you want.
Enjoy the free tiers and auto-scaling while you are small and building your business, and then if you're in a SaaS model where you business has settled down into to a predictable level of service, and you want to migrate out of the cloud, it's just a matter of plopping that same cloud-native stack onto your own hardware.
I think cloud is good for prototyping. When the evolution settles and most of the rough edges are smoothed, moving to on-prem is better.
I suppose I need to test this statement, but in my experience most Ruby services are bottlenecked by the database. (A huge part of Basecamp's AWS bill was RDS I'm sure.) They are just a front-end for shoveling data in and out and rendering a template. Ruby is also not uniquely slow compared to similar languages that are still "hip", like Python, and a lot of good work has been done to speed it up. Most Ruby services ever written are probably running happily on a single not very big instance serving a few thousand users at most. I prefer not to write it anymore because I am pretty much over any language without a good type system, but the misconceptions are irritating. DHH, although his writing style says "high school student", is right on the money about cloud and on-prem, having done both. On-prem is cheaper for the same hardware, and not really any more difficult to manage at large scale, but you lose flexibility and need a different skill set. Cloud seems simpler at first, but rapidly becomes not simple, just more flexible.
Not just Ruby. Most web services are bottlenecked by IO wait. Your language does not matter when your code is waiting for an SQL query that isn't hitting an index.
Execution-wise, I'm pretty sure Ruby and Python compute at the same level. Where Ruby has trouble is that it's memory heavy. Not an issue for most services, but if you do anything that necessitates a ton of object memory allocation (anything data science related), that's where Ruby takes a huge hit. At least last time I checked.
Going with the newest Ruby and using its brand new JIT (YJIT), which is written in Rust (xd), gives some performance boost.
I built an on-prem Kubernetes cluster before, it was for an air-gapped lab in order to run an application that would combine data from a bunch of internal systems. The hardware was very nice at the time, and the server and service plan from HPE ran to about $50k. A license to run our software in their lab was something like $250k a year.
Yup, Kubernetes is the way to go. Don't build for a proprietary cloud. Build for Kubernetes and run on commodity servers: on-prem, an owned or leased data center, managed Kubernetes in the cloud, or self-managed Kubernetes on VMs obtained from the cloud.
DHH said more in the first minute than he did in his entire dynamic typing article
somehow he didn't just expound on nothing, but made great points!
That ending...beautiful 😢
With that said, I was talking to someone who was running a small company and hosting everything on-premises. They're mainly business facing and host internal tools and such, and he swore by it. I know this doesn't count for much, but I'm thinking maybe there's merit here if you know your load isn't going to be enough that you have to think about scaling globally to thousands/millions of requests per second?
Say what you want, but the focus on privacy makes me like this company very much. Hey is a very interesting idea I've been using for two plus years now, very happy :D
That's why I love Go with serverless. As long as the provider satisfies the normal http.Handler interface (like Vercel, GCP, etc) you could easily move to another provider. You can even build the subrouters inside of your function because you have the full request object.
Umm, why is that specific to Go? An application is supposed to be designed like that irrespective of what language it is being implemented in.
I'd like to know more about that, how did you learn?
@@kuroxell Experience I guess? I am a Software Engineer by profession and read a lot of technical blogs.
@@rochakgupta6116 it has nothing to do with the application design. It's the fact the standard library defines that interface. For example, with a Python lambda on AWS, you get a context and a JSON payload. If you move to another provider, you now have to deserialize to that new model. With Go you get the http.ResponseWriter and *http.Request objects directly. Now you can use anything that satisfies http.ServeMux as a router. This means you can use the same routing logic in a regular application vs a serverless function and never need to have different functions to check HTTP methods, path variables, etc depending on whether it's running as a function, as an app, running on AWS, running on GCP, etc. It can be the same everywhere.
Wait till their disks start failing in a couple of years, weird hypervisor bugs pop up, abuse (and an e-mail service has a lot of that) in one container creates problems in other containers, the CPU/RAM ratio of the hypervisors is way off compared to the workload of the containers, causing wasted and unused resources when either the RAM or CPU of the hypervisor is fully in use, and the new scale of possible networking issues due to running your own switches and routers shows up. The extra responsibility you get compared to running software in "the cloud" is always bigger than you'd expect, and I'm afraid the cost reduction of running your own servers is mostly short-term.
Tom is a genius. JDSL is very powerful. JDSL is written in JDSL. It means you don't need a cloud server or an on-premise server. It runs itself. In the future, servers won't run JDSL, JDSL will run servers
Wish more companies would go lighter on cloud services. Especially many of the ML services are dismal and often prove inferior to something you could just do yourself with a few packages.
What, you know this?! On-premises & cloud hardware, features, networking etc. are 100% the same, and they can still use Cloudflare just like any firewall. The only difference is management and ownership. Cloud is on-premises that just isn't solely managed and owned by you.
I was psyched to see OpenFAAS come up in this conversation. Serverless doesn't have to be expensive. You just have to know how it works and set it up yourself if you want to save money with it. That said, still working on figuring out OpenFAAS...
Can you easily scale down to zero with OpenFAAS?
@@chris-pee lol I can't EASILY do anything with OpenFAAS at the moment, but that seems to be the main point of it. Haven't actually played with it for a while.
a few months ago I was chatting with my ex-IT-department lead and he was arguing it's better to have your own servers (and if it were up to him, Windows servers, even though we didn't have any Microsoft-ecosystem apps, just because he didn't like to read what to do in Linux). I was arguing that for newer projects (even if they were rare) we should use DigitalOcean droplets so that we didn't have to wait 1-4 weeks to get servers, especially since I was the guy that, after I got the connection data, had to install everything on that machine, because the guy that was paid to do it, the IT guy, was refusing to learn (80% of the time I was just copy-pasting into the terminal from DigitalOcean docs)
And in this discussion he was telling me that he currently has to buy more cores and more RAM for the server because the Java devs were too lazy to do some refactoring and stop wasting memory, not to mention that the database was an IBM DB2 ecosystem where nobody working on it was actually capable of configuring/optimizing it (and this I found out when I told them to start doing load-balancing replication, at least for reads/writes, because they were constantly hitting the max connections threshold and transactions, even for small things, were taking forever)
As a QUESTION: was I wrong when I told him my suggestions?
You were right, but your issue isn't about cloud vs on-prem. It's more about him being a Luddite.
Going raw-dog from the start has its merits, too: old $500 desktop with embedded graphics will carry you through demos to funding, good deal on 100kg of iron nearing end of life (replaced or shutdown within a year anyway) will not allow turning a blind eye to such important aspects as monitoring, capacity planning, recovery from hardware failures, data persistence, failure modes, stress testing, etc. No button to upgrade rds instance in 2 minutes to fall back asleep in 5 may bring attention to the schema before everyone forgets which features use what tables, how and why. Maybe some stuff will end up in hbase and other in hive/snowflake/whathaveya with pg binlog continuously imported into. Not having luxury of single day of frantic plowing once in 4 months, you’ll figure healthier plowing schedule, quick Thursday plow and another on Saturday afternoon, without urge to fix all problems accumulated over 4 months asap bc hourly bill is the same whether ur plowing or not. Forced to hire infra guys with longer beards, you get exposed to their poetic feedback on your architecture, choice of tools and growth strategies; such a treasure. And if grim reaper comes, hardware will go way before the desks, chairs or coffee machines, with less of a haircut for amortisation. Frequently halting creative flow of features to deal with infra sucks, but unlearning 1-click fixes for anything infra is no cake either.
"stop writing ruby" facts.
Not only that, Rails is the slowest web framework in Ruby as well. Roda+Sequel is 10 times faster than Rails, but DHH loves his own creation.
All those apps are written in slow rails
You forgot to account for developer time. That's why people use rails.
@@jimboxx7 If, like Prime thinks, Ruby is the reason they are spending so much on AWS, maybe that time saved isn't worth it at all ^^
The hard part of creating a SaaS company is making it successful, not making the code run fast. Shorter developer iterations will help you get there. If you're working for BigCo with an already defined business model where economies of scale apply, sure, a language that focuses on making your server run faster will make a difference. For all the medium to small companies out there though, time to market is way more important than time to compute.
@@jimboxx7 Oh, Makes sense !
A friend of mine works for a company that is halfway through doing this, and now their ops team is bigger than their dev team and they're already considering abandoning the plan altogether. AWS seems expensive, until you start managing everything (including the hardware) yourself. You can definitely end up spending a lot with AWS (especially if you actually listen to their solutions architects), but going fully self-managed is a pretty drastic solution.
Source: trust me bro
The author posted an article called "Why we're leaving the cloud" back on October 19, 2022, which goes a lot more into the technical details and explains a lot better why the decision really made sense.
08:15 I think you might be the only person who can strengthen their argument by waving hands around and saying, "You know what I mean". I know nothing about Ruby, but I know what you mean.
its a gift
These are legacy services. Are they really going to scale up rapidly? It seems like a 2 week lead time on needing better hardware is fine. I suspect they will never need new hardware unless they have really bad programmers.
you will always need new hardware. hardware fails, dust happens, moths, etc etc. things just go wrong
@@ThePrimeTimeagen so they just have to keep some extra servers and drives in stock
@@ThePrimeTimeagen - That's true, and that can happen in a way that's a surprise. Hopefully you've engineered for the necessary redundancy to deal with this. If you have, it seems like a two week lead time on replacement hardware represents an acceptable risk in having a sequence of failures that exceeds your redundancy planning.
You have a lot more experience with that level of thing though than I do I suspect.
It’s still cheaper even with all the redundancies
@@invinciblemode - I worked for ServiceNow in Devops for a year. They have their own cloud. They have a datacenter in every region they service. But they were wanting to go into markets where they didn't have datacenters yet without the cost of setting one up until the market was large enough. They contracted through (of all the stupid places) Azure.
But, they were very committed to their own cloud and their own datacenters. Partly because it enabled them to control compliance with regards to data location. But also because it was a lot cheaper for them.
But, ServiceNow is an interesting case. They started before AWS and the like were a thing. And they're big enough to have significant economies of scale.
The place where I work recently moved to the cloud due to scale issues, with in-house data centres not able to take the load. But now the cost is so high they're taking measures to reduce load on the data warehouse. We're into analytics, processing more than 4 trillion records per year for our customers before moving to the cloud; now it's even more.
I really don't know which option is better, but as long as you are able to provide optimal service and you are not underwater, whatever suits your needs is best.
RE the Plough confusion...
What the guy said actually makes sense if you read it. He was referring to the machine itself in his analogy i.e. if you only need a Plough machine 3 times a year, why buy one and store it in a barn most of the year, rent one 3 times a year instead.
I obviously have no idea about farming and ploughing and plough machines, so whether that is actually sound advice for the real world of farming I have no clue.
He was talking about storing the plough itself in the barn
3:38 "How does it take this much compute power to run some legacy users?". The paragraph above says "But the big move was Hey...". That compute power is for all their apps, not just legacy stuff. Which totally changes the value proposition.
My father was complaining that his CIO wants to migrate to AWS because costs will be high both in the short term and the long term, but apparently, all these people care about is sounding cool to investors.
By the way, u and v used to be considered two variants of the same letter, hence the odd naming of w, which used to be written as uu. Like c, this letter would take on slightly different pronunciations in different contexts due to Latin sound changes.
we should treat slow languages as we treat crypto: electricity and resource suckers. I wish there were energy tools for x86_64 like ARM has, which tell you energy usage while writing programs
the cloud king came down from his throne to save all the lost repos whose original sin was only that they existed not in the cloud. he took on all the git blame, and merged all who accepted trunk-based CI/CD into his master repo, and now they dwell in the non-cloud for eternity.
I really enjoy your videos, and there are areas where I feel like an idiot watching them. Then watching this, I realise how you can know so much in one area and then know just enough in another to think you understand when you don't. Still a good video
Well, "we need it" and "a couple weeks away" is something very familiar to anyone who does software development. I get why it's hard to come to terms with when it comes to scaling and reliability though :D. I think if your business is very predictable you can still benefit from an on-prem solution, seeing how much it costs to run enterprise infra at scale in the cloud.
I used to work for GCP. I think clouds are a great service for rapid scaling, build/dev work, disaster recovery, a few other cases. I also think that once things settle down, your stable system should be run "on prem".
“Stop writing Ruby - get perf fixed” I laughed out loud 😂 thats a good burn
You can do hybrid cloud. The firewall/CDN can remain in the cloud, with a VPC connected to an on-premises server.
IDK why but this discussion gets me very excited.
HEY I LOVE RUBY SAY YOU ARE SORRY
Ploughs go in the Shed, crops and livestock go in the Barn. Also it takes more than 3 days to plough usually.
watching every video until I become a senior engineer
you would be a rather poor one
cloud is for loads that are highly volatile. if your business has huge load peaks and your architecture can scale, then you can save a lot of money by running it on a cloud.
the whole idea of aws started from amazon noticing that their own data center servers ran 70-90% idle and figuring out how to rent that idle capacity to outsiders. the same will easily happen to anyone running their service in their own datacenter if the load varies a lot.
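The idle-capacity point above can be sketched with a toy calculation; all the load numbers below are invented for illustration:

```go
package main

import "fmt"

// idleFraction: when you provision for the peak of a variable load,
// (1 - average/peak) of the fleet sits idle on average. That spare
// capacity is exactly what a cloud can rent out to someone else.
func idleFraction(load []float64) float64 {
	peak, sum := 0.0, 0.0
	for _, l := range load {
		if l > peak {
			peak = l
		}
		sum += l
	}
	return 1 - (sum/float64(len(load)))/peak
}

func main() {
	// Invented hourly request rates over one day for a bursty service.
	load := []float64{10, 8, 6, 5, 5, 7, 12, 30, 55, 70, 80, 85,
		90, 88, 75, 60, 50, 45, 40, 35, 28, 20, 15, 12}
	fmt.Printf("idle capacity when sized for peak: ~%.0f%%\n",
		100*idleFraction(load))
}
```

The flatter your load curve, the smaller that idle fraction, and the weaker the economic case for paying cloud margins on elasticity you don't use.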
0:45 They brought heritage services off of the cloud. Services which they aren't selling anymore, but providing service for to existing customers. This means that they aren't going to grow, ever. And if hardware requirements increase, they will know, well in advance, just by looking at the numbers of how much of their existing hardware is being used and drawing a trend line. This is why they moved away from the cloud. There is no point in being prepared for a sudden spike in customers and thus need for more hardware, when you are not selling.
I imagine this guy was too young to see how complicated things were before cloud? Spending nights in DCs, power outages, slow scaling, hardware issues, hardware technicians coming on-site to fix things while stuff is down, storage and link backups, power backups? All this needs to be done by hand, not through a comfy API. He'll be back in the cloud in no time.
When load is predictable, get off the cloud. Cloud is great for startups and massive growth, but once you level off and can be predictable about things (and don't have Ticketmaster-type spikes)... get off the cloud. Seems logical to me, but not for everyone.
On-prem suits businesses that are regulated and required to store their data within their locality. I am sure there are other use cases, but this is usually the primary one that I have encountered.
0:32 WTF? You can use CloudFlare in front of your self-hosted stuff. (Poe's law might apply.)
Boss: What're these costs?
IT: In-housing some operating expenses while saving 100-400 million in 3 years.
Boss: *begins spasming on the ground* Get, GET OUTA HERE AND DO IT!
Roko's Acolyte: The Basilisk will spare your worm life when she is born.
the 25k users is talking about the legacy products; he then moves on to talk about HEY, their latest product, and that's when he starts talking about the new hardware. You have misunderstood/misread.
Dude, you killed me with the ruby part 😂
pffftt you made me spit out my coffee with "the man with very little opinions"
3:15 tens of thousands are using the legacy apps that 37signals is committed to maintaining, even though they don't actively market or sell those products anymore. As for products they currently sell, Basecamp alone has more than 15m users.
When I die, please don't send any angels to sing for me. All I want to hear is the sound of this guy ranting about something.
Ruby is still better than Javascript
7:18 the cheapest cloud vendor that could meet my company's baseline performance and user experience requirements was 16 times more expensive than co-locating hardware and spending a few hours a month for the same end-user experience.
The cloud solutions we tested have had 10x the amount of downtime and time with unacceptable levels of performance compared to the colo servers in the last 6 years.
Most companies that are of similar scale spend more time managing their cloud Infra than we do for our dedicated infra.
You can put cloudflare or cloudfront in front of your private server. It's not fully roll your own, but most of the way there.
"The man with very little opinions"
Pot meet kettle
Compare that to companies that stop supporting a product as soon as they decide they could make more money selling something else.
1. Just contract with Cloudflare to front the system, taking care of CDN needs as well as your favorite DDoS protection. 2. Do some capacity planning during your monthly tracking report… that will give you more than a few weeks' heads-up for new HW needs.
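Point 2, trend-based capacity planning, can be sketched as a least-squares line over monthly utilization samples; every number below is invented for the example:

```go
package main

import "fmt"

// forecast fits a least-squares line to utilization samples (fraction
// of max load per month) and returns the monthly growth rate plus the
// month index at which the trend crosses the given threshold.
func forecast(util []float64, threshold float64) (slope, month float64) {
	n := float64(len(util))
	var sx, sy, sxx, sxy float64
	for i, y := range util {
		x := float64(i)
		sx += x
		sy += y
		sxx += x * x
		sxy += x * y
	}
	slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)
	intercept := (sy - slope*sx) / n
	month = (threshold - intercept) / slope
	return slope, month
}

func main() {
	// Invented monthly peak-utilization readings from a tracking report.
	util := []float64{0.28, 0.30, 0.31, 0.33, 0.35, 0.36}
	// Order hardware when the trend says you'll cross 50% of max load.
	slope, month := forecast(util, 0.50)
	fmt.Printf("growing ~%.1f%%/month; trend crosses 50%% around month %.0f\n",
		slope*100, month)
}
```

With growth this gradual, the forecast gives months of warning, which makes a two-week hardware lead time a non-issue.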
Imagine thinking the only way to prevent DDoS is Cloudflare or whatever it's called. If you think that, we're def not the same coder.
STOP RUNNING CONTAINERS ON VMS!!!!! RUN CONTAINERS ON BARE METAL ALREADY!! JEZUUUUUZZZ
Depends on the type of task they are doing; replacing Ruby with something like Go might give you only a small percentage of gain if all their computation lies outside of the language.
Obviously they can invest money and replace Ruby with Go, but that is easily said; it takes commitment, so overall I don't get why he is focusing on this point.
One thing is certain though: there is value in avoiding the cloud at some point. If you are spending 3.5 million per year on AWS, maybe it is time to have your own datacenters.
What's this machine doing? Ruby.
FWIW my understanding is that they brought home not only the legacy apps, but also their main businesses of HEY! and Basecamp, which serve far more than 25,000 users.
One takeaway I see: 37signals is also most likely lean enough that the purchase process for new servers is not the red-tape hell that exists at big corporations.
Would be funny to see Prime's reaction to this article now
"We need it when usage increases past 50% of max load."
I don't actually know if you could pay me enough to move a heavily used email server on-prem. That's truly what nightmares are made of
I just want a federated open source PAAS abstraction layer. Take all self hosting companies and let them run their stuff on each other’s hardware through this abstraction layer that becomes easier to hire devs for because it’s a standard. It would protect against DDOS just as much as AWS if enough self hosting companies joined in.
Man you missed the whole plough and barn analogy. A barn in this case is the storage space for the plough lol.
You got it confused with a silo maybe.