Our Colocation Hosting versus AWS Costs Compared 2020 Edition

  • Published 23 Aug 2024

COMMENTS • 235

  • @ServeTheHomeVideo
    @ServeTheHomeVideo  3 years ago +41

    Quick one here. I saw there is a mislabeled slide at 14:22 where it says 1-year instead of 3-year. Making a note here and putting it in the table of contents.

    • @nicholaswjamrock
      @nicholaswjamrock 3 years ago +1

      You did not factor in the resale of the old hardware; that too will affect cost.

  • @MarkRose1337
    @MarkRose1337 3 years ago +23

    I administer a site on AWS that does about 2 billion page views a month, so medium to large scale. I have some observations about running on AWS. The amount of EC2 we use would fit in a single rack in a datacenter and burn under 5 kilowatts.
    The biggest benefit is having a team of people to manage the hardware, in multiple datacenters. Pretty much all our servers are cattle, and if one system goes down it simply gets replaced. You could take down an entire datacenter and we would know, but we'd stay up. We just don't have to worry about hardware. The second benefit is having the ability to scale when required, without having to provision everything for peak load. Five times our traffic in ten minutes? Easy. And we only pay for the additional hardware as we need it. The biggest downside to being on AWS is the egress bandwidth fee. Egress bandwidth and CloudFront are negotiable when you have enough traffic.
    As far as using Epyc for running websites, I've been disappointed by both Naples and Rome on AWS. The cheaper price is largely a false economy. Both Naples and Rome suffer from high memory latency, which shows up as much higher CPU usage. For instance, a c5a.large (Rome) will run at 50% higher CPU than a c5.large running the same mcrouter and PHP load. And an r5ad.2xlarge (Naples) running memcache will use 500% of the CPU of an r5d.2xlarge. Yes, five times! I've seen similar poor performance running Java applications. Where I have found value in the Epyc instances is in situations where the system is lightly used: stuff like Kubernetes master nodes, dev & QA environments, and so on. I hope Milan performs better.
    People say AWS is expensive, but that's true only if you don't value your labour and can tolerate downtime when there are problems in your data centers. I can't imagine the number of people we would have to hire to have 24/7 monitoring of infrastructure in multiple datacenters to support the uptime promised to customers. Not having to worry about hardware is an accelerant, especially for online-focused businesses. That being said, there's still a role for the traditional datacenter for other purposes. If I were running a render farm, I'd almost certainly build it out where power is cheap and not run it on AWS, for instance. Likewise if I hit sufficient scale that some cost such as bandwidth or file storage became cheaper to do on my own (exabytes). Netflix is famous for being entirely on AWS, except for its CDN, where the bandwidth savings are worth it. Similarly, Dropbox moved off AWS when storage became cheaper to bring in-house.

    • @gravypod
      @gravypod 3 years ago

      Can't you get the same features as AWS in colocated setups with something like Canonical's MAAS? All of your machines are still cattle; they're just way cheaper.

    • @MarkRose1337
      @MarkRose1337 3 years ago +3

      @@gravypod I took a quick look at MAAS and it seems to be about running your own data center. One of the biggest advantages of AWS is not having to run data centers or deal with physical hardware. To use an analogy, it's cheaper and easier for someone else to own and operate a fleet of jets while we buy tickets to get where we're going. Sure, if we had to move a lot of people in the same direction frequently it might make sense to buy a corporate jet, but we're often heading in different directions and need different capacities at different times. And we're not in the business of maintaining jets or maintaining server hardware.

    • @wiziek
      @wiziek 3 years ago +4

      @@Traumatree Dude, it seems you don't really know about AWS or cloud, let alone servers or networking hardware.

    • @wiziek
      @wiziek 3 years ago +1

      @@Traumatree I'm not using AWS, but unlike you I'm aware that every approach has its advantages and disadvantages. But hey, you are the stubborn and ignorant one, not me. One thing is being cautious, but you are ignorant and paranoid on top of lacking knowledge and expertise :). You even mentioned a 56k modem alongside AWS access; what was that supposed to mean? I doubt you could even comprehend the basic concepts of how AWS or a similar system works "under the hood".

    • @compternerd1010
      @compternerd1010 3 years ago +1

      As someone who was a systems engineer in a data center for years: people imagine these perfect racks where everything is identical. I managed 20 full racks in 2 geographically diverse locations. 14 SANs from 4 different brands, fiber and iSCSI networks, OOB networks, Dell, Supermicro, multiple generations of servers. Brocade, Cisco, Force10 switches. 40G networks, 10G. Windows Hyper-V, ESXi, CentOS KVM, Kubernetes for virtualization. PDUs. Every consumable needed to be managed on hand for replacement.
      In a big company it takes months to get approval for new hardware. Normally over a month for the vendor to configure and ship. Then you schedule your time, which during COVID was impossible with limited access to the facilities. If it was a remote site, flying there. Then you hope and pray the equipment was configured correctly and everything works. I've spent countless months going back and forth with vendors trying to get them to understand their stuff is broken.
      All this to say that people do not count labor costs in on-prem solutions. As well as warranty contracts. The time involved to troubleshoot a large environment and set up new equipment is huge. As well as monitoring all of it for failures. It would sometimes take a year from submission of quotes to a server entering production. When I went to become a cloud engineer it was great being able to use infrastructure as code to just have it set up in a few minutes.

  • @FWVO
    @FWVO 3 years ago +59

    Generally it comes down to this: "You're either gonna do it yourself or you're gonna pay the big bucks."

    • @journeyboy4507
      @journeyboy4507 3 years ago +17

      This is true if you are looking to just lift and shift. But the key is what he mentioned in the middle of the video; he wasn't willing to change his operating model. For example, if he were to host this site in AWS in an S3 bucket (which would host the content and serve traffic), his AWS costs would be a fraction of his co-location cost by a LARGE margin. Not only that but you can start to get caching at the edge with CloudFront; i.e. you can start reaping the benefits of feature enhancements instead of just cost savings.

    • @magfal
      @magfal 3 years ago +1

      @@journeyboy4507 MinIO works well for static sites and works just as well with CDNs.

    • @mikepb78
      @mikepb78 3 years ago

      @@journeyboy4507 Agreed: laziness and stupidity. And that is fine with their own money. But don't make a video about how smart you were for not using AWS in a way no sane person would.

    • @GlennEggleton
      @GlennEggleton 3 years ago +2

      if you do it yourself you're paying someone the big bucks to maintain your on-prem infrastructure.

    • @magfal
      @magfal 3 years ago +3

      @@GlennEggleton The price of a decent admin vs the cost of the cloud is nothing once you reach a production workload, for quite a few types of applications.
      I suspect a lot of companies would migrate off the public cloud if they took a long hard look at the economics using current capacity per dollar as a yardstick instead of what was likely Nehalem or Westmere when they first started their march to filling Bezos's pockets.

  • @wilcosec
    @wilcosec 3 years ago +49

    TL;DR: don't move to the cloud for cost savings, especially for a small to medium-sized business with a more traditional web stack.
    IMO, moving to AWS is great for business agility and where extreme elasticity or high availability is needed.

  • @MrBillrookard
    @MrBillrookard 3 years ago +13

    I was doing some web-dev stuff years ago where I had a co-located server about 200 miles away. I wound up bringing it all in-house, literally, as I got a business class connection to my home, some static IPs assigned, and built a small rack in my basement and have been serving those sites for about 10 years now (the website has been active since 1997). Considering my internet access is provided by the business connection, I pretty much pay out of pocket for the static IPs, plus electricity. I can thus expand the servers as I wish, add storage as I wish, add RAM as I wish, and if something goes down, I walk down a flight of stairs to figure out what's going wrong.
    I've literally saved thousands a year doing this, and while I understand the advantages to not having to deal with the hardware layer of things, it's something I enjoy tinkering with - and of course saving money is always a good thing.

    • @MarkRose1337
      @MarkRose1337 3 years ago +6

      Nothing beats serving from home if it's a hobby project and you enjoy it!

  • @colinstu
    @colinstu 3 years ago +92

    Give us all your colo stories and lessons learned!

  • @GeoffGroves
    @GeoffGroves 3 years ago +21

    As partner and CEO of an AWS consulting partner, I thought this was really, really well done and very accurate. Nice job!

  • @pepeshopping
    @pepeshopping 3 years ago +54

    Crazy amount of money for what the web site is/does.

    • @TurboVisBits
      @TurboVisBits 3 years ago

      WordPress plugin here, plugin there... 48 GHz to serve 5 users later...

    • @ozziesheppard17
      @ozziesheppard17 15 days ago +1

      I'm curious: if he could fit this website into a 1 or 10 Gbps connection, why doesn't he just host it at his offices? His cost of downtime on the site would have to be very high to outweigh $10-20k per year of savings.

  • @noahlistgarten7832
    @noahlistgarten7832 3 years ago +18

    Another excellent video! I don’t know if this has been covered in an article before, but would love a video describing how the website is currently set up/configured, from networking/firewall to server to hypervisor(s) to VMs to containers to CMS (as security allows, of course).

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +6

      We have not done that in a long time. In some of the old 2013-2014 colocation pieces we annotated pictures of the installation with each machine's function.

  • @ws2940
    @ws2940 3 years ago +21

    I definitely would like to hear about some of the lessons learned from colocation since 2013. It's always interesting to learn how decisions were made, based on what was needed, etc.

  • @serverguru6312
    @serverguru6312 3 years ago +1

    As a hardware guy tasked with investigating MSP and CSP business models this week, this is timely :) Thanks for the continued content Patrick.

  • @gdrriley420
    @gdrriley420 3 years ago +36

    Can't wait for Zen 3 EPYC. Racks and racks of it will be coming in soon at the data center I work in.

    • @AchwaqKhalid
      @AchwaqKhalid 3 years ago +2

      Except that...
      ua-cam.com/video/kNVuTAVYxpM/v-deo.html

    • @rayjaymor8754
      @rayjaymor8754 3 years ago +1

      @@AchwaqKhalid Whilst that obviously does suck... in the case of Riley working in a datacenter, that's not gonna be a concern for him lol

    • @gdrriley420
      @gdrriley420 3 years ago +1

      @@AchwaqKhalid Meh, doesn't matter for me. I don't even get to pick configs; those got picked long ago with the contracts.

  • @steven44799
    @steven44799 3 years ago +5

    Colo pricing and what you get for your monthly $$ must vary region to region. For our base monthly we get the 42U rack + 2 kW of constant draw (A + B combined); every extra kW above that has a fixed fee. Data is not supplied: you pay monthly for a fibre cross-connect to your supplier of choice and they charge you monthly for whatever speed/transfer you agree upon.
    We moved to colo after looking at how much refitting/rebuilding our server room would cost to add generator backup, gas fire suppression, cooling, and then the maintenance/power costs. We worked out that we could move into a colo and pay for years' worth of rack space that comes with all of that and more for the same cost, but now it's monthly instead of up front. Hardware costs would be more, but in return the original server room becomes a DR/secondary location and you are only buying a few extra switches/routers, as the bulk of the hardware was needed regardless of location.

  • @brannancloward
    @brannancloward 3 years ago +1

    Very interesting breakdown. I really appreciate the helpful information! You have answered in 1 video every question I have ever had and could not find the answer to. THANKS!

  • @mikepb78
    @mikepb78 3 years ago +24

    Sounds like a site that could be done using static site generation and hosted serverless. That would cut it down to a fraction of this.

    • @zeytelaloi
      @zeytelaloi 3 years ago +3

      It's just a complicated blog, isn't it?

    • @mikepb78
      @mikepb78 3 years ago +9

      @@zeytelaloi Yep, it seems a bit ridiculous and old-fashioned to have dedicated hardware; just have a static site generator and publish content to the cheapest provider. Even if the content changes on an hourly basis it should not be a problem.

    • @pointinpolyhedron
      @pointinpolyhedron 3 years ago +7

      @@mikepb78 Yes, this seems pretty ridiculous. A CDN like Cloudflare along with a good static site generator should bring this cost way down.

    • @mikepb78
      @mikepb78 3 years ago +2

      @@pointinpolyhedron Yep. Not only that, it'd be a lot faster and more resilient to failures. Taking a CPU-intensive task and provisioning it for the peak will be a lot more expensive on AWS than on a bare-metal server. But if that CPU task only has to run once per change and be published to cloud storage, you go from thousands of dollars to cents.

    • @bigpod
      @bigpod 3 years ago +2

      @@pointinpolyhedron Even if the current stuff stays, you can cut down on cost just by using Cloudflare for caching of certain stuff.

  • @Whipster-Old
    @Whipster-Old 3 years ago +3

    Awesome video. I see so many organizations put stuff in the cloud and wind up surprised about the cost.

  • @D_T244
    @D_T244 3 years ago +3

    Interesting video! For AWS, have you considered reserved instances only for the "valleys" and using on-demand in an auto-scaling group for varying traffic? This way you can save costs on idle hardware. Another suggestion is to look at Savings Plans, a newer version of Reserved Instances that offers greater flexibility and the same savings. I also highly recommend looking at Graviton2 ARM instances, which are 3/4 the cost of x86 machines with similar performance.
    Pro tip: to save on AWS outbound data charges, take a look at Lightsail instances; those come with terabytes of bundled transfer (both in and out).

    • @MrAtomUniverse
      @MrAtomUniverse 3 years ago

      If an EC2 instance can speak to a Lightsail instance via private IP, it would help save a lot; otherwise, not really.
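The "reserve the valley, burst on demand" idea in the thread above is easy to sketch as arithmetic. A minimal model, with all prices as hypothetical placeholders rather than real AWS rates:

```python
# Cover the steady baseline with reserved capacity and pay
# on-demand rates only for the hours above it.
# All prices are hypothetical placeholders, not real AWS rates.
HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.10   # $/instance-hour (assumed)
RESERVED_RATE = 0.06    # effective $/instance-hour (assumed ~40% discount)

def monthly_cost(baseline, peak, peak_hours):
    """Baseline instances reserved 24/7, extras on-demand during peaks."""
    reserved = baseline * RESERVED_RATE * HOURS_PER_MONTH
    burst = (peak - baseline) * ON_DEMAND_RATE * peak_hours
    return reserved + burst

# Provisioning 10 instances 24/7 on-demand vs 6 reserved plus 4
# burst instances that only run 100 hours a month:
all_on_demand = 10 * ON_DEMAND_RATE * HOURS_PER_MONTH
mixed = monthly_cost(baseline=6, peak=10, peak_hours=100)
print(f"all on-demand: ${all_on_demand:.0f}/mo, mixed: ${mixed:.0f}/mo")
# all on-demand: $730/mo, mixed: $303/mo
```

The gap widens the spikier the traffic is; with a flat load the reserved discount is the whole story and the comparison collapses to the colo-vs-reserved numbers in the video.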

  • @MyAeroMove
    @MyAeroMove 3 years ago +6

    Very interesting topic. Unfortunately we didn't get much detail on the traffic side. Could you please share approximate numbers? Or even do a separate video about this (CDN, caching, etc.)?

  • @GGBeyond
    @GGBeyond 3 years ago +7

    I went with colocation for my web projects because of the lower overall monthly cost compared to AWS, and I'm able to pick up plenty of used servers off of eBay for really cheap. I run a LAMP stack with Memcached and Elasticsearch.

    • @jfbeam
      @jfbeam 3 years ago +2

      AWS is more for people willing to trade money for convenience. With AWS (et al.) you can pay for exactly what you need, precisely when you need it. You don't have to think about space, power, cooling, or hardware failures; you're paying someone else to do that.
      (For the record, depending on the size of your operation, colo can be more expensive than hosting it yourself. Even more so if you have a commercial or industrial area to run it in: 3-5x cheaper power.)

    • @GGBeyond
      @GGBeyond 3 years ago

      @@jfbeam Thanks, that's good to know. I have a very small budget and I can easily manage the hardware and network myself, so it made perfect sense for me to go with colo. Maybe I'll use AWS someday when I need that convenience.

  • @ewenchan1239
    @ewenchan1239 3 years ago +7

    I would definitely be interested to watch/listen to the pros and cons of self-hosting vs. Amazon hosting.

  • @wywywywywywywy
    @wywywywywywywy 3 years ago +6

    Just in time! I'm actually costing colo vs cloud for a client when this pops up :)

  • @rougebarbu
    @rougebarbu 3 years ago +1

    Did you try to calculate the cost of renting bare-metal servers at a facility like OVHcloud? They take care of all the hardware/network issues for you. We are running 6 servers with AMD 7371, 256 GB RAM + NVMe disks for about $2.5k/mo, including bandwidth.

  • @SimonYusupov
    @SimonYusupov 3 years ago +2

    Can you make a cost estimation of colo vs AWS cloud-native services? So, not hosting in EC2 but rather refactoring your site to use S3, CloudFront, Lambda, etc. This would make for a better comparison, since you'd be comparing the best of colo vs the best of AWS, as opposed to the best of colo vs the worst of AWS.

  • @georgehenes3808
    @georgehenes3808 3 years ago +5

    If you ever want to feel better about naming things, you could always remember how Intel are doing at that game... Love the show!

  • @PaulJeffery81
    @PaulJeffery81 3 years ago +15

    What would the cost and ROI be for using a CDN to cache images and static content? I would like to see what the cost delta is between colo bandwidth and CDN costs.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +6

      Just to give you some sense, one of our lab data center providers (different facility) offers 1 Gbps 95th percentile on a 10 Gbps port, a 42U rack, and 208V 30A for under $2000/mo. They have a lot of upstream bandwidth to different providers. We have been quoted less for larger commits. We do not use that much bandwidth for the hosting side, but 1 Gbps (and you technically get more than that with 95th percentile billing) is about 328.5 TB/mo of data transfer. On the CloudFront CDN that is over $20,000 without any other CDN costs included. There are other CDNs, but we were focused more on AWS.
      If you look at how Equinix bought Packet, a big part of that is that other organizations have figured out that outbound AWS bandwidth is very expensive.

    • @MarkRose1337
      @MarkRose1337 3 years ago +8

      @@ServeTheHomeVideo If you're doing 300 TB in monthly traffic, you can get vastly better pricing than $20k/month from CDNs. Once you hit 10 TB/month the various CDNs will start sharpening their pencils if you're willing to commit to a minimum usage for a period of time. If you're consistently doing over 10 TB/month it's worth shopping around and negotiating. Granted, if you're already paying for and underutilizing that gigabit connection, it's free, and no CDN will match that.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +9

      @@MarkRose1337 At the current growth rate we will be doing 10 TB/mo by Dec 21-Mar 22, but we are vastly underutilizing the connections we have. This project was started in 2013, when $100/mo or so would have been a go/no-go decision-maker since STH had little to no revenue then.
      Also, of course, our web hosting infrastructure is a very small component of what we run in different data centers these days. The GPU servers we test are over 1 kW per U these days, so I worry more about power than anything else.

    • @bens1058
      @bens1058 3 years ago +4

      @@ServeTheHomeVideo You are conflating your potential data transfer given your connection speed with your actual data transfer usage. Your logic here is all mixed up. 🤪

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      @@bens1058 I was just using that as an example since the question was about colo bandwidth costs. We are using actual, not potential, for the actual cost comparison.

  • @itleyland2030
    @itleyland2030 3 years ago +11

    One item to factor in is CAPEX vs OPEX and how you can utilize those expenses on your taxes. At one location I am a part of, we are now deploying AWS Outposts for three years so we can recognize it as a three-year CapEx deduction. Even then AWS is still more expensive than keeping the site on-prem, but the advantages of the AWS ecosystem justified the costs. There are technical reasons why we cannot put our whole stack in AWS, but one day it may happen. By then the need to still have colo space may change.

    • @MrAtomUniverse
      @MrAtomUniverse 3 years ago +1

      I don't quite understand the taxes part. So if it's an expense, you can write it off on taxes, versus if it's a large investment in a machine? Where do I read up more on this?

    • @Lindguardian
      @Lindguardian 2 years ago +1

      @@MrAtomUniverse It's a BS argument to justify higher costs. It might be important to impress shareholders that your company spends the money "right", but in reality the less money you spend the better.
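The mechanics the thread is debating come down to timing. A hardware purchase is typically capitalized and deducted over a recovery period, while a cloud bill is an operating expense deducted as incurred. The figures and the 3-year straight-line schedule below are illustrative only; real tax treatment varies by jurisdiction:

```python
# CAPEX: deduct a purchase over its recovery period.
# OPEX: deduct each bill in the year it is paid.
# Numbers are made-up; this only shows the timing difference.
def straight_line(capex, years):
    """Equal annual deductions over the recovery period."""
    return [capex / years] * years

capex_deductions = straight_line(90_000, 3)   # buy servers up front
opex_deductions = [30_000] * 3                # pay a cloud bill yearly

print(capex_deductions)                       # [30000.0, 30000.0, 30000.0]
# Same total deduction either way; the difference is when the cash
# leaves and how the expense lands on the books each year.
print(sum(capex_deductions) == sum(opex_deductions))  # True
```

That symmetry is why the reply above calls it a presentation argument: the deductions even out, but cash-flow timing and how the spend looks to shareholders differ.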

  • @jeremybarber2837
    @jeremybarber2837 3 years ago +3

    Way to tease us about Milan... A great video and super neat to see the hard facts on both.

  • @valdius85
    @valdius85 3 years ago +4

    Thank you.
    I understand nothing but I'm glad I've heard that :)

  • @johnhanley2431
    @johnhanley2431 3 years ago

    Excellent video. Your effort to be fair and balanced is good. AWS, Azure, GCP, IBM, and Oracle have strong benefits; colocation has benefits; and on-prem data centers have benefits. The key is that few openly share the pros/cons of each. Post more videos; I appreciate your perspective.

  • @Kvantum
    @Kvantum 3 years ago +17

    Have you looked at the numbers to run the same workload on Azure?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +15

      Not yet. Perhaps next time we will do that unless a lot of people ask for it.

    • @GuruEvi
      @GuruEvi 3 years ago +3

      @@ServeTheHomeVideo Azure and Google both have much better pricing than AWS. For AWS to be useful, you really need to be a big outfit and negotiate a contract, so you can negotiate not paying any bandwidth costs, given you spend a few hundred k per year.
      The same goes for hardware: Lenovo/Dell/HP can really go toe-to-toe with Supermicro on price, but again, you need to be able to commit to more than a few devices yearly. We've just gotten a complete server for $15k with 384 GB RAM and dual Xeon Platinum 8268 CPUs; the street price for just one of these CPUs is currently ~$8k.

    • @TotlKaos
      @TotlKaos 3 years ago +1

      @@GuruEvi We are a large enterprise and we still cannot negotiate with Amazon. What started out as a new project migrating and upgrading our core system went from $10k a month to now over $280k a month. In the meantime we have a large colocation facility with our full enterprise hosted there. I still don't know why Amazon was even considered, but then again, luckily I don't pay the bills.

    • @unusyousuf9506
      @unusyousuf9506 3 years ago +1

      Yes, please do one for Azure.

    • @jfbeam
      @jfbeam 3 years ago

      @@TotlKaos [censored] For $280k _per month_ I can build a datacenter.

  • @jonathanschober1032
    @jonathanschober1032 3 years ago +2

    You didn't really break down the EC2 costs and just gave us blanket large figures. I understand that you didn't want to do any re-engineering, but even if you did a little bit, by using something like AWS CloudFront in front of your CMS and throwing away all of your in-memory cache, you could save tons on your compute/EC2 and pay exactly the same (slightly less) on bandwidth.
    I feel like this is a traditional data center guy looking at AWS as a collection of VPSes. That's the wrong way of looking at AWS. AWS doesn't just provide racks of servers; it provides services that are designed for cloud. If you don't want to use them, then AWS would be the wrong choice.
    Source: I'm a DevOps engineer at a publicly traded company.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      The CMS is only part of the hosting for STH. We have forums, ad servers, and such as well that do not fit as well into the CDN model. Of course, we could rely on external ad servers and change the entire application level, but that changes the operating model again.
      Totally agree we could change things, as mentioned in the video. We are looking at hosting the application, as-is, in colo versus AWS.

  • @andibiront2316
    @andibiront2316 3 years ago +2

    Off-topic: I was reading the SFP+ adapter module buyer's guide (just bought a CRS317-1G-16S+RM for my home lab), looking for cheap alternatives, and I was wondering if you did or could do a 10GbE roundup (BCM57810, ConnectX-2 and 3, X520, Chelsio, and so on). Thanks!

  • @jgurtz
    @jgurtz 3 years ago +1

    Was happy to discover this channel; very interesting video and numbers here, thank you! This is an underreported take IMO. Many people could do much better getting out of the cloud even if they don't understand hardware; hire a consultant to spec/build the system(s). It's a one-time cost! Also a very interesting take on VMs vs containerization: the economics get complicated, but I think it's worthwhile taking a deeper look in a similar way to this AWS vs colo analysis. A big factor is: how much is the cost of time doing software maintenance and managing the configuration of VMs? There's a lot of ongoing hours spent on OS upgrades and patching, whether for security, software going EOL, or just trying to get the latest features. Getting to a place where most/all servers are in containers, with good CI/CD automation, can lower this cost dramatically. Of course there is a huge learning curve, but it is super cool to look at.

  • @torak456
    @torak456 3 years ago +1

    How would you handle the opportunity cost of partial upfront? Prepaying $20,000 reduces the total cost by reducing the monthly bill quite substantially, but what else could you do with that money? Expanding the site, funding a project, etc. may need to be put on hold while the monthly savings replenish the coffers, and adding new content could do that more effectively. Of course this isn't necessarily the case with STH, especially since you chose colocation, but it is still a worthwhile exercise for a viewer who may be making a similar decision.
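One way to frame the question in the comment above is to weigh the billing reduction from a partial-upfront prepay against what the same cash could earn (or fund) over the term. Every figure here is a made-up placeholder for illustration:

```python
# Compare a $20k prepay against an alternative use of the cash.
# All numbers are hypothetical placeholders.
prepay = 20_000
monthly_savings = 700       # reduction in the monthly bill (assumed)
term_months = 36
annual_return = 0.07        # assumed return on the alternative use of cash

# What the prepay saves over the term, minus the prepay itself.
gross_benefit = monthly_savings * term_months - prepay
# What the $20k could have earned, compounded over the term.
forgone = prepay * ((1 + annual_return) ** (term_months / 12) - 1)
net = gross_benefit - forgone
print(f"gross: ${gross_benefit:,.0f}, forgone: ${forgone:,.0f}, net: ${net:,.0f}")
# gross: $5,200, forgone: $4,501, net: $699
```

With these placeholder numbers the prepay barely wins; a slightly better alternative use of the cash (like content that grows revenue) flips the answer, which is exactly the comment's point.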

  • @YehudaKatz1
    @YehudaKatz1 3 years ago +4

    Definitely want to hear colo stories.

  • @the_DOS
    @the_DOS 3 years ago +7

    Great analysis. Although, like you said, with the right developers, you can do things for thousands less.

    • @patrickd9551
      @patrickd9551 3 years ago +1

      While spending tens of thousands on said developers ;)

    • @ThisIsTheInternet
      @ThisIsTheInternet 3 years ago

      @@patrickd9551 Lmao yeah, what. Development work does not save money at this scale.

  • @Darkk6969
    @Darkk6969 3 years ago +3

    I am wondering what the true cost would be if you are running several VMs, as there is a licensing cost if using VMware. We use Proxmox, as it's free and a subscription is optional. So for colocation you have to keep both software and hardware costs in mind.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +4

      We started using Proxmox VE in our colocation back in 2013 for exactly this reason.

  • @cedivad
    @cedivad 3 years ago +2

    Nice video! No traffic data, so let me ask it another way: is the hardware properly sized for the job, or maybe a bit of overkill? Like the fancy new database storage: isn't that a lot for a "tiny" website heavily reliant on cache?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +6

      That is totally correct. As I mentioned, we are designed to run in the single digit percentage utilization. During peak days, we never go over 10-15% utilization. We have extra nodes specifically so that we have hot spares and we can run VMs on different physical machines. We are also 100% solving software problems with hardware. We were having issues with tail latencies some days, but moving to Optane for databases fixed that. Realistically, if we wanted to lower redundancy we could run everything out of one much larger and more expensive server and lower hosting costs even more, but there are also advantages to not worrying about that.
      We also have an entire spare node in the data center at any given time so if we have a severe failure, we can bring up that new node and migrate to it which greatly reduces the need to physically service machines.
      At some point you rely on luck, boots on the ground, a cloud provider, or extra installed capacity to deal with failures.

  • @ianperkins8812
    @ianperkins8812 3 years ago +1

    Latecomer to this video, but an excellent discussion of the benefits of colo vs cloud in your particular use case.

  • @spotlessthoughful
    @spotlessthoughful 3 years ago +1

    I agree; anything old-school is usually a bit cheaper in a colo.

  • @shanebauman6555
    @shanebauman6555 3 years ago +8

    Would love to see some vlog-style videos whenever you visit the colo (if they allow it).

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +3

      That is a current issue with all of the facilities we use. No photos/ video.

    • @LethalBB
      @LethalBB 3 years ago

      Near-universally prohibited.

  • @YeOldeTraveller
    @YeOldeTraveller 3 years ago +4

    AWS is a big win if you have widely separated peak needs and you can save enough vs buying the hardware that would service those peaks.
    For any application with a fairly stable load, owning your server is going to be better once you get to a certain size. Below that size, their scale advantage allows you to buy what you need from them for less than you can get it yourself.
    AWS is also a win for a use case where you need something but have no ability to support it yourself.
    There are still situations with a steady load where buying the hardware is the win.
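The "better once you get to a certain size" point above can be sketched as a toy break-even model: amortize owned hardware over 36 months plus a fixed colo fee, and compare against a per-instance cloud rate. Every number is a made-up placeholder; the point is the crossover, not the figures:

```python
# Fixed costs (the colo fee) amortize away as the footprint grows,
# so owning wins above some size. All figures are hypothetical.
AMORTIZATION_MONTHS = 36
COLO_FEE = 800            # $/month for the rack (assumed)
CLOUD_RATE = 200          # $/instance-month (assumed)
CAPEX_PER_SERVER = 4_000  # purchase price per server (assumed)

def owned_monthly(servers):
    """Amortized hardware plus the fixed rack fee."""
    return servers * CAPEX_PER_SERVER / AMORTIZATION_MONTHS + COLO_FEE

def cloud_monthly(servers):
    return servers * CLOUD_RATE

for n in (2, 5, 10):
    cheaper = "own" if owned_monthly(n) < cloud_monthly(n) else "cloud"
    print(f"{n} servers: owned ${owned_monthly(n):,.0f} "
          f"vs cloud ${cloud_monthly(n):,.0f} -> {cheaper}")
```

With these placeholder rates the crossover lands at exactly 9 servers; below that the fixed colo fee dominates and the cloud's scale advantage wins, matching the comment's framing.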

  • @thatLion01
    @thatLion01 3 years ago +1

    I bet a lot of companies would save a lot if they went to colo. This is an amazing video. Would you mind explaining this rush to the cloud?

    • @abhishekcherath2323
      @abhishekcherath2323 3 years ago +2

      Imagine worrying about all this stuff if it isn't literally what your business is about. Consider, say, a molecular sim provider: on cloud with Terraform, you have one team that just deals with all the infra stuff, directly from the head office. You don't even have to *think* about going to a datacenter, working with remote techs, buying hardware, anything.

  • @squid11160
    @squid11160 3 years ago +5

    A few things jump out at me, but the main one has to be the gigantic DB requirements. Shard your DBs sooner rather than later. Overall it seems that your architecture is simply not financially viable on the cloud. I also didn't hear you mention what kind of high availability you have. What happens if the colo DC goes hard down? What's the cost to your business? It's important to note for the viewers that modern infrastructure can easily be more profitable on the cloud if you architect with cloud in mind (loosely coupled infrastructure, high-availability design, containerization, functions as a service, smart use of CDNs...).

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +6

      Answered a bit of this in other comments.
      The DB bit is not too bad. Optane just fixes tail latencies so we do not have to think about it anymore. You are correct it is a HW fix.
      On the HA side, in the main DC we have multiple nodes, snapshots, a dedicated backup system, and then bits like hot spare machines. If something fails, we usually just heal by absorbing the hot spare and then fixing/ replacing the failed bits. If the colo goes down, we have a smaller (frankly better sized) build in another facility using the older hardware from the main facility. About 1-2 times a year we host the site from that location but we are effectively using it as Active-Passive between the two. Cost of downtime is relatively low given the type of site.
      Again, this is not meant to be generalized to every workload. This is just updating what we have been observing over the past 7-8 years.

    • @squid11160
      @squid11160 3 years ago +2

      @@ServeTheHomeVideo thanks for answering!

  • @michaelmanus7765
    @michaelmanus7765 A year ago

    Rather curious about all of this, as I can see where colo would have a hands-on and cost advantage over AWS for smaller companies with lower storage/bandwidth needs. There is of course the other side to this: companies would need to hire more "hands-on staff", either in-house or via 3rd-party support, pay for equipment and space, and have no one to really "co-share" risk with. Any thoughts on this?
    Also, how does one differentiate and find advantage with taxes? Obviously buying equipment has one method of tax advantage, as does leasing; do AWS-type services then become simply operational costs? I think this is just as important. Again, any thoughts on this?

  • @dupajasio4801
    @dupajasio4801 3 years ago +2

    Excellent, so hard to find real numbers. I read one needs a Ph.D. to figure out real cloud hosting costs. My company is moving everything to the cloud. Is it cost effective? I can't tell; I don't see the invoices. I guess the big shots in HQ know better. They are so disconnected from reality on the ground... Again, such good info.

    • @morosis82
      @morosis82 3 years ago +2

      It depends. Our AWS costs are relatively low, but we host internal solutions that aren't being hit by a million people, so can take advantage of things like serverless pricing by the 100ms.
      I suspect if you had a relatively constant load that kept your hardware busy it would work out better the other way.

    • @dupajasio4801
      @dupajasio4801 3 years ago

      @@morosis82 My issue is latency: PLCs talking to ERP and servers running lines. I don't have the luxury of negotiating latency with providers, and we're not downtown somewhere. So cloud reliability vs cost vs usability is an issue here. But seeing that the cloud is not as great as some think gives me something to think about. Thanks for the reply.

    • @MarkRose1337
      @MarkRose1337 3 years ago +3

      @@dupajasio4801 The cloud is a series of tradeoffs. It's a lot of work properly running multiple data centers for redundancy, organizing offsite backups, redundant power, redundant cooling, and so on. It's not just the price of the servers. Also, if you are located in a place with expensive power, it may be outright cheaper to rent cloud capacity. Lastly, EC2 is like a lease, so the business can write it off as an operating expense and not a capital expense. It's fine if your business can tolerate being down on the rare occasion your network connection breaks.
      But if I were running a plant, I'd absolutely have on-site servers for whatever is needed to run it. Time and time again a strategically operated backhoe will chew an important fiber and the internet will have a bad day. Will the plant still operate or shut down safely if the Internet goes down for 12 hours? Yes, you can get service from multiple providers, but it can be very difficult to know whether they share any backhaul.

    • @patricknelson
      @patricknelson 3 years ago

      @@MarkRose1337 Right tool for the right job, I always say.

  •  3 years ago +1

    @ServeTheHomeVideo What should the cost estimate for AWS (or another cloud provider) look like to really consider moving from a co-location to a cloud?

  • @patrickbouldinkm5l143
    @patrickbouldinkm5l143 3 years ago +1

    In your case, if all you're doing is comparing IaaS costs, then OK; the reality is that the other benefits of hyperscale public cloud have to be considered: multi-region replication of the DB, and in fact automatic DR if processes are followed. Horizontal hyperscaling for a bursty situation totally invalidates a non-cloud approach. Finally, when you consider native serverless, where you're splitting the apps into microservices and containers, the comparison to enterprise computing can become moot. In other words, if you compare leasing a fleet of trucks versus purchasing the trucks, OK... but the reality is you need to compare replacing the trucks with long-haul trains, commercial shipping alternatives, or even short-term transportation contracts to handle the spikes. Bottom line: public cloud offers multiple paradigms that invalidate comparing IaaS only.

  • @_TbT_
    @_TbT_ 3 years ago +2

    DAMN, AWS is friggin expensive!! Very interesting insight! Would love to hear more about the technical specs of your setup.
    There are also options between AWS and colo:
    - There are other cloud providers who are mostly cheaper than AWS (Azure, Google, OVH, and many smaller providers, partially country dependent).
    - There are providers who offer not really a "cloud" but rather standard or individualized VMs, based on KVM, with given hardware performance (Digital Ocean, Vultr, Linode, and many smaller ones).
    - There are root server providers renting out standard or individualized root servers, partly with separate internal networking / VLANs (OVH, 1&1, Hetzner, Strato, and many others, at least in Europe).
    And only the "last step" then is really buying and colocating hardware.
    For some use cases, a mixture of all of the above can also be useful.
    I have privately rented root servers for almost 20 years, and for some small tasks I have $2 up to $12 VMs. And a home server rack of course. ;)

    • @peterkuria2000
      @peterkuria2000 3 years ago

      May I know which is your preferred cheap but performant European VPS provider?

    • @_TbT_
      @_TbT_ 3 years ago +1

      @@peterkuria2000 I have (privately and for my job) several VPS and KVM "root" servers (still virtualized but with dedicated resources) with www.netcup.de/vserver/vps.php (starting at €2.62) and www.netcup.de/vserver/ respectively. It is a German website; I think they do not have an English version.
      Hetzner ( www.hetzner.com/cloud for VPS and www.hetzner.com/dedicated-rootserver for root servers) is very affordable as well. No need to colocate imho, albeit possible: www.hetzner.com/colocation
      There is www.scaleway.com/en/elements/ and www.scaleway.com/en/dedibox/ in France.
      There is OVH, which is a HUUUUGE cloud and server provider, originally from France but now with locations all over the world (also Canada and the US) -> www.ovh.ie/ (or the suitable country website)
      www.webhostlist.de/ has quite a big selection of VPS and root server providers, depending on the needs.
      Also alternatives: www.vultr.com/ , www.linode.com/
      As I wanted to express: there are SO MANY other providers offering VPS, KVM dedicated servers, root servers, and cloud infrastructure! No need to only look at the "big 3".

  • @JohnSautter
    @JohnSautter 3 years ago +2

    Maybe I am missing something, but don't you have to factor in the costs of the building, lease, insurance, and power, to name a few? With AWS those costs are zero, but they are not free if you host things yourself.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Using colocation includes all of the facility features. You do pay for them, but the costs are effectively spread over multiple organizations. Often when people do cloud v. on-prem they include building all of the redundancy for a few racks. With colocation, you get redundancy and facilities on a full data center scale rather than a small-scale facility with a few racks. Many comparisons you see online use inefficient on-prem build-outs versus AWS.

  • @scottwagner5358
    @scottwagner5358 3 years ago +1

    There are many "middle ground" multi-tenant cloud options out there that allow you to run VMs at much lower cost than the hyperscalers while still providing elasticity and high availability. Were these not considered?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Hi Scott. These have been part of the analysis since late 2012/ early 2013. This was colo v. AWS costs so bringing in other options to the presentation would not align with either of those. Realistically there are so many hosting options that we could make a 100 hour video and not get through them all.

  • @rudiservo
    @rudiservo 3 years ago +1

    Thanks for the video, I enjoyed it. Although AWS is a big expensive player, there are others like Digital Ocean, Linode, OVH, and Hetzner (Europe).
    Do you think this still applies to those cheaper solutions?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Yes, but much less so. Those options are not AWS, the topic of this video though.

    • @rudiservo
      @rudiservo 3 years ago

      @@ServeTheHomeVideo Thanks. Yes, not the topic of the video, but it would be a nice new video perhaps?
      I too have some development servers in-house instead of in the cloud, but cost for companies varies from country to country depending on sysadmin wages and profits. AWS is so expensive for some countries that it is actually cheaper to hire a whole team and rent a few servers.

  • @DiomedesDominguez
    @DiomedesDominguez 2 years ago +1

    I know that your current situation, as a company, may be different, but since a lot of the applications you mentioned are mostly web oriented, have you considered migrating to a lower-power architecture, like running a really big cluster of Raspberry Pi 4s? Or am I way out of the park with such an idea?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Waaaaaay too slow. People do RPi clusters, but even on the older ThunderX2 32 core/ 128 thread server chips, we could not run STH at the same SLA. Also, RPi's are limited to 8GB each, and that 8GB is very expensive while also having lower reliability local storage and lower performance networking.

  • @setharnold9764
    @setharnold9764 3 years ago +3

    This was a fun video, I'd like to hear the lessons learned, too :) thanks

  • @diavuno3835
    @diavuno3835 3 years ago +2

    Definitely share Colo stories, every bump and bruise! 🤣👍

  • @deependrashukla2190
    @deependrashukla2190 3 years ago

    How do you take care of DR? Why did you want to reserve your instances and how much of reserve instances do you use?

  • @Waaaaaaaaaaaang
    @Waaaaaaaaaaaang 3 years ago +1

    Hmmm, yeah. The t3a (AMD Epyc) EC2 instances are on average about 7-12% cheaper than their t2/t3 (Intel) counterparts, so when I bring up machines I lean towards AMD whenever possible. The headache is that if your instances are based on vendor AMIs, they might restrict you from using AMD, often for no discernible reason.

  • @modsoul
    @modsoul 3 years ago +1

    Would love to hear your colocation stories. Thanks for sharing

  • @engrpiman
    @engrpiman 2 years ago +1

    Even more complicated in 2021, when it takes 6 months to get servers in because of supply chain issues. I run a server room for most things, as it's cheaper long term. We pay $120k yearly for Salesforce, and it costs me $50k every 4 years to run the other servers.

  • @simonherbert_
    @simonherbert_ 3 years ago +4

    I would really be interested in your tech stack. Hosting your site sounds like a couple hundred bucks per month, max, in the cloud, on-demand. I can only assume you're running a PHP site without any caching, which would result in your massive server costs.

    • @Lindguardian
      @Lindguardian 2 years ago

      Just accept that they can use a calculator for their own needs. Egress alone would be "a couple hundred bucks".

  • @technics6215
    @technics6215 A year ago

    TL;DR: some companies go hybrid. By hybrid I mean colocation + cloud. Some services can be moved to an in-house server.

  • @Monasucks
    @Monasucks 3 years ago +7

    Our company runs on old, used 2011 V0 and V2 hardware with SAS SSDs and 10GbE... it keeps running and running.

    • @squid11160
      @squid11160 3 years ago +1

      Until it doesn't

    • @morosis82
      @morosis82 3 years ago +4

      @@squid11160 sure, but at the prices you might pay for such a server you can afford to have a couple of warm spares to cover for that eventuality.

    • @pjohnson21211
      @pjohnson21211 3 years ago +2

      @@squid11160 That is a consideration. BUT in my experience a lot of "old" hardware has plenty of useful years left. The key consideration in using it is being prepared for it to fail.

    • @jimbarino2
      @jimbarino2 3 years ago +4

      @@pjohnson21211 RAIS, it's the new hotness! (Redundant Arrays of Inexpensive Servers)

  • @Protoscribe
    @Protoscribe 3 years ago

    Are you charged per GB on your bandwidth? Here in South Africa, for local, we have something called NAP Africa which is free if you have an AS number and then international IP Transit is per Mbps and is relatively cost-effective even if you don't buy large quantities.

  • @Some.real.human.
    @Some.real.human. 3 years ago

    It would be interesting if you included the revenue from colocation content in the calculations

  • @ambyjkl
    @ambyjkl 3 years ago +4

    You want to look into the Cloudflare CDN, which could potentially let you reduce in-RAM caching by a lot, basically for free.

  • @levifig
    @levifig 2 years ago

    The thing I always see missing from these kinds of estimates is the man-hours put into researching, sourcing, assembling, and deploying the hardware in a colo scenario. I know most of us enjoy that stuff anyway, but the reality is that in a professional situation we all tend to go with big-name brands for the service and support, and the same thing applies here: with AWS, if an instance fails, you spin up another and YOLO. One single disaster recovery occurrence in a colo and you're spending multiples of what you saved by choosing colo in the first place…

  • @InsidiousDr9
    @InsidiousDr9 3 years ago

    It's nice that you shared; I wish more companies would.
    What price do you put on having site redundancy? Your colocation is a single site, and you actually mention a major outage.
    Additionally, do you actually need reserved instances? I bet your traffic patterns could benefit from elastic scaling.
    Preferring VMs over containers because they work is kind of missing the boat if you want to cost-optimize your operations. You don't have to convert them all overnight, and there may be some functions that would be good candidates for Lambda serverless, which is stupidly cheap.
    I'll give one other intangible in AWS's favor: if you need another set of hands for, say, a new feature with hardware requirements, on AWS you can use Cognito to grant credentials, and the UI will be familiar to thousands of people already trained up; your set of VMs will have a learning curve for anyone you bring on. Just saying...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Included in this is actually renting space and having a second site that is much smaller. We usually migrate older machines there and run it as a passive failover site for if something really bad happens.

  • @InfoNathan
    @InfoNathan 3 years ago

    What about the cost of security, licensing, and redundancy? Are there any significant gains from that perspective to hosting in AWS vs. self-hosting?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Most of the application-level bits are the same in either case. We realistically have two sites for redundancy and spare capacity/ machines so it is not a big driver there. Again, we still use AWS for some parts of what we do where it makes sense.

  • @billymania11
    @billymania11 3 years ago +1

    I think AWS had a good run, but people are beginning to figure out this cloud thing. AWS was able to monopolize the discussion, but thankfully other minds are beginning to see through the narrative and form their own conclusions. AWS will have a place for sure, but it won't be an automatic first choice as in the past. I think hybrid is more the future.

  • @liryan
    @liryan 3 years ago

    Not bad, and quite good from the cost perspective! But having gone to your website for a look, it's pretty much a big blog + a forum, all of which could be done with a static site generator + CDN + some serverless tech / your own servers. The cost would drop dramatically.

  • @filipersimoes
    @filipersimoes 3 years ago +1

    Have you looked into AWS Lightsail? Since you are still running a workload that relies on VMs, Lightsail might be a great option. Lightsail instances have bandwidth included and a fixed monthly pricing model. If you need some help, let me know. Cheers

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Yes. That looked very interesting when we were doing the calculations. When we did the quick tally, for us it would fall around the 1-year partial upfront for monthly recurring but without the upfront commitment. If they offered 64GB VMs we might look at it. My sense is that it is only a matter of time until that is an offering so may include it in the next iteration.

    • @filipersimoes
      @filipersimoes 3 years ago +1

      @@ServeTheHomeVideo The Lightsail instances are based on burstable instance types (t2/t3), so the maximum available today is 32GB. re:Invent is right around the corner, so let's see what it brings :)
      If you need further help with AWS-related stuff, let me know!

    • @squid11160
      @squid11160 3 years ago

      @@filipersimoes Lightsail is absolutely not a good path forward for an operation of STH's scale.

    • @filipersimoes
      @filipersimoes 3 years ago

      @@squid11160 Lightsail is perfectly fine for workloads that aren't cloud native, and there is a clear migration path to EC2. If you have additional information on why their workload is not suitable for Lightsail, I would like to know more :)

  • @DarkKnight-ree
    @DarkKnight-ree 3 years ago +1

    LOL, my $5 Digital Ocean droplets are as far as I have gone.
    I really need to learn more.

  • @josephbeineke1175
    @josephbeineke1175 3 years ago +1

    You didn't touch on the fact that you could scale the site down to, say, 1 VM at 3am with AWS and pay almost nothing. Also, you could use spot instances: keep one VM as the always-on master, and then use spot instances to scale with demand.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Hi Joseph. On a normal day 3AM traffic for us is only about 33% lower than at noon. It varies a bit but that is a rule of thumb we use. We are effectively modeling two VMs per major service. I mentioned we run more in colo servers than we are modeling to make it more fair to AWS. This was not done to make AWS look bad. It is what we use as the minimum to keep things running.

    • @_TbT_
      @_TbT_ 3 years ago +2

      That would be a very US-centric view. It is always noon somewhere around the globe. STH is used and appreciated around the world. Greetings from Germany.

  • @tiedye001
    @tiedye001 3 years ago +12

    Honestly, if you couldn't colocate cheaper than using Amazon, I feel almost no one could.

    • @tiedye001
      @tiedye001 3 years ago

      @@Virtuallynathan1 Read my comment again.

  • @berndeckenfels
    @berndeckenfels 3 years ago +3

    How much redundancy do you usually plan for in your colo?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +9

      We have more redundancy in terms of VMs on different machines and backups than we are modeling here. Since we have the capacity, the extra VMs are effectively "free" to run. Everything is currently set up in the primary site with multiple VMs, snapshots, and dedicated backup infrastructure on-site. We then have a second "passive" site for backups and enough capacity to run the site using about 20% of the resources instead of 3% in the primary site. Sometimes you see STH served from what is normally the passive site which is in another facility. Given that we have a content site, we can handle lower redundancy/ downtime better than other application classes. I would do something different if I ran a SaaS for medical device endpoints as an example.

  • @minibikemadman
    @minibikemadman 2 years ago +1

    I love my colo.. 78 a month and good times.

  • @bryanne274
    @bryanne274 3 years ago +1

    Is the technology you're utilizing the right one for your application?
    It sounds like all of this hassle could have been avoided by leveraging another technology platform (backend).

  • @MatthewKanwisher
    @MatthewKanwisher 3 years ago +1

    A static website shouldn't use much hardware; you can largely offload it to a CDN like Cloudflare.

  • @excitedbox5705
    @excitedbox5705 3 years ago

    I think any company that is big enough to have a part-time or even full-time administrator should go with either colocation or managed dedicated hosting. We have dedicated servers with about 200 hosting customers, and you really just need to run updates. Most of the stuff we do for customers is stuff that a dev would be doing anyway, like creating databases or setting up FTP users and email addresses.

  • @dacresni
    @dacresni 3 years ago

    Would you consider colocating outside of the Bay Area?

  • @Jollyprez
    @Jollyprez 3 years ago +1

    Jeez, use S3 for images and Cloudflare for most caching... then maybe a Linode cluster. I can't see how it would come anywhere near what you're talking about.

  • @AnthonyBachour
    @AnthonyBachour 3 years ago

    Why don't you cache with a service like Cloudflare or Fastly?

  • @SteffanPerry
    @SteffanPerry A year ago

    Can you post some real numbers in terms of data and compute usage? It's hard to really compare if you are simply talking raw hardware vs raw hardware. How many req/sec are your web servers receiving? When are your peaks and valleys? What sort of load does your DB have to handle? You mention data transfer cost; is this cost coming from your web servers directly, or are they lightweight APIs?
    When you show a 4x cost after reserved pricing with AWS, that is kind of a red flag… granted, at only $1k a month you probably aren't large enough yet to make it worthwhile to hire someone to refactor this a bit and get a real ROI saving from that.

  • @mt1885
    @mt1885 3 years ago +1

    FACT --> It depends on who owns your *DATA*: you don't own it when you are on their outsourced 'cloud', period.

  • @beardybrowsingdude
    @beardybrowsingdude 3 years ago

    If every request is being handled directly by an EC2 instance instead of a CDN (e.g. CloudFront), then I would imagine these prices are correct. With a CDN, the cost of hosting the site should be drastically lower. The CDN takes the files served via nginx and caches them at a level above the actual EC2 instance, serving requests to other users without having to fetch the content again from the instance. This trades EC2 cost for CDN cost, but the CDN cost almost always comes out way lower. Properly using CloudFront with WordPress requires a little effort in the publishing process but would absolutely be worth it.
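The CDN offload this comment describes is easy to quantify. A minimal sketch; the request volume and edge cache hit ratio below are made-up illustrative numbers, not measurements of any real site:

```python
# Hedged sketch: how much origin (EC2) traffic a CDN cache in front of a
# mostly-static site can absorb. Inputs are illustrative placeholders.
def origin_requests(total_requests, cache_hit_ratio):
    """Requests that still reach the origin after the CDN edge cache."""
    return total_requests * (1 - cache_hit_ratio)

# Example: 10M requests/month at a 95% edge hit ratio.
print(origin_requests(10_000_000, 0.95))
```

At a 95% hit ratio only one request in twenty reaches the origin, which is the mechanism behind the "trades EC2 cost for CDN cost" point above.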

  • @piranhaofserengheti4878
    @piranhaofserengheti4878 3 years ago

    I think you forgot to calculate that you need 2 AWS servers if you want uptime, as they don't provide network redundancy.

  • @adamjurgens9703
    @adamjurgens9703 3 years ago

    How do you source your colo and DR facilities?

  • @d3yuen
    @d3yuen 3 years ago +1

    Thank you!

  • @calmeilles
    @calmeilles 3 years ago +2

    I've been in exactly the same situation, now with 4 colocation sites. The single notable difference is that I have never used colo connectivity; indeed, in at least 2 of those sites it was not available. Instead we leased and installed our own fibre. That is an up-front cost and a leasing cost, but it is fixed and not usage dependent, making cost forecasting that much more reliable.
    The instance cost of AWS looks reasonable enough. It's the data transfer charges that kill you. In one case by so much that setting up in a second AWS location for service resilience would have been absurdly expensive, every byte of data out of one to provision the other incurring a charge.
    A lot of the actual customer serving data costs can be mitigated using content delivery networks and CDNs have other advantages but they are another cost and another component of your stack to be managed.
    Another aspect of any cloud provider that I do not like is the "clever stuff" they offer to make your life easier. The problem is that no two providers offer the same features making it impossible to insure yourself against disaster by using two or more vendors and very difficult to replicate those features on your own hardware.
    If you have a transient need for huge compute, or if you want to pump shed-loads of data _in_ and if you can leverage the unique features of several other things they provide then AWS is brilliant, absolutely awesome in capability and flexibility and it _can_ be a cost saving.
    But there are many other use cases, especially those with heavy data _out_ needs, where AWS, or indeed any cloud provider, is not going to be the most cost effective answer. What other advantages they provide may be worth the additional budget to you. But just be aware that the bottom line argument is not always good.
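The egress-heavy cost concern raised here can be illustrated with a tiered-pricing sketch. The tier sizes and per-GB rates below approximate AWS's published 2020 internet-egress schedule, but treat them as illustrative assumptions rather than authoritative pricing (the real schedule has more tiers and changes over time):

```python
# Hedged sketch: tiered per-GB egress billing, AWS-style.
# Tier boundaries/rates are approximate and truncated for illustration.
TIERS = [  # (tier size in GB, $/GB)
    (10_240, 0.09),   # first 10 TB
    (40_960, 0.085),  # next 40 TB
    (102_400, 0.07),  # next 100 TB
]

def egress_cost(gb):
    """Cost of `gb` gigabytes of egress across the tiers above."""
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        chunk = min(remaining, size)
        cost += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

print(egress_cost(5_000))  # 5 TB, entirely within the first tier
```

Run against a data-out-heavy workload, this kind of per-GB metering is exactly why egress dominates the bill long before instance costs do.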

  • @m_sedziwoj
    @m_sedziwoj 3 years ago

    The cloud is not only AWS... so maybe do a video about competitors; times are different than a few years ago.

    • @m_sedziwoj
      @m_sedziwoj 3 years ago

      And the cost of experts... even making this spreadsheet cost a few thousand dollars. I think people ignore the human cost too much. And it's not that I think cloud is better, because in a non-dynamic environment it is often not the best.
      BTW, shouldn't more data from the DB be coming from cached queries?

  • @MrAnthonyP69
    @MrAnthonyP69 3 years ago

    What do you do for backups? Cloud services are not doing backups for you; they are doing them for their own benefit, to cover their asses, not yours. If you need to do off-site backups, add that outbound data transfer to your costs.

  • @prashanthb6521
    @prashanthb6521 3 years ago +1

    Even I decided against AWS; too expensive for my use case.

  • @werwars1190
    @werwars1190 3 years ago

    What about augmented reality and AI-supported guidance? Implications for cryptocurrencies and specialized decentralized administrations would change the game. Since we are in a developing technology environment, it gets better every day.

  • @tiedye001
    @tiedye001 3 years ago +1

    You have Milan chips... nifty

  • @LethalBB
    @LethalBB 3 years ago +2

    TBH you're more than over-provisioning; you're burning money. Hitting 30% utilization won't send you to hell.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      It is more of a function of having minimum sets of nodes with the ability to have nodes fail in place and not have interruptions. We are at low CPU and network utilization usually, but memory utilization is usually much higher. We can run fewer nodes with higher utilization, but the desired behavior is that nodes fail and we have enough capacity that we can service them when it is convenient rather than necessitating emergency visits.

  • @pointinpolyhedron
    @pointinpolyhedron 3 years ago

    Am I missing something? This simply looks like a blog; there is not much dynamic content at all. Why can't most of it be made static? I have developed web applications for real companies that make a considerable amount of revenue, and all we needed was a Heroku instance.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      The WP instance is only a part of what we are hosting, but you are right that part is more static. It effectively is served more statically given how much caching we use so that we are servicing effectively static requests from RAM rather than from NVMe SSDs. WP is easier to manage though.

  • @andrewevans5750
    @andrewevans5750 3 years ago

    Missing some context, I think. Costs on AWS dropped significantly over the last 2-3 years. We run a full data warehouse for < $10k per year, emailing on serverless at a decent load for $75/month for a marketing and analysis company, and ETL for < $50 per month on top of the database, with RDS for our database. APIs cost pennies alongside Lambda. You couldn't say that in 2017. Not to mention running an entire infrastructure on my own, compared to 2-3 people on Lightcrest installing everything else. AWS rocks when you go serverless or have a small ETL need. I have a big box for my data things: an AMD Threadripper and a dev data server. It really depends on your need. I run the Threadripper 24/7, and it doesn't make sense to run dev databases at more than the cost of electricity on an old gaming PC. It is really nice to offload edge cases for certain applications, like highly specialized email used at light to medium load. Aurora has been kicking butt, and when I interviewed with DigitalGlobe, they found Redshift cheaper 4-5 years ago, when it first came out, compared with managing your own DBs at a data center. Great company too, but I like to code; they have tons of data. Hybrid cloud is great: scale ETL systems on EC2 that run non-stop to handle load, handle edge cases on serverless, and once something runs around the clock with heavy load, in-house starts to make sense. Not sure on the frontend.

  • @Shadowauratechno
    @Shadowauratechno 3 years ago +1

    Is there any chance you'll make the zipper hoodie in that grey color you're wearing? I like the color a lot, but I have trouble with pullover hoodies.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Teespring does not have the same grey color zip hoodie and does not do a full logo/ image across the zip hoodie. We do have a few zip hoodies now with a smaller front logo. Expect to see those soon in STH videos.

    • @Shadowauratechno
      @Shadowauratechno 3 years ago

      @@ServeTheHomeVideo that's too bad. I think I'll buy the black one then!

  • @jimiscott
    @jimiscott 3 years ago

    Why on earth are you not using a reverse proxy such as Cloudflare?

    • @jimbarino2
      @jimbarino2 3 years ago +2

      Maybe because there aren't a lot of people out there trying to silence a site about server hardware?

  • @tjmarx
    @tjmarx 3 years ago +1

    "Falling from the sky" is a better title than "falling from the cloud"; falling from the cloud sounds too obvious and tacky.
    Bond's Skyfall released in Q4 2012. The actual title of the article should have been "Skyfall: Leaving the cloud" to take advantage of SEO.