and tag to HTML) was co-evolving and certainly on people’s radar, its first draft didn’t appear until mid-2007, and its working draft predates MSS by only a few months.

But folks would quickly come to see the differences between the protocols as a strength, not a weakness, when considering HTTP as a fundamental replacement. Like the problems mentioned above, the reasons why fall into three distinct but interrelated benefits, ones that could help solve those broad categories of problems (and some others).

What can’t it do? (aka Functionality)

In 1997, HTTP/1.1 was released, and by 1999 it had been updated. By this point, there were many features that would help with the problems mentioned earlier, as long as the solution could take advantage of them. Reusable connections for client requests and “pipelining” (allowing multiple requests to be made before any of them have completed) meant HTTP was getting more efficient. HTTP caching had grown considerably, including beefier cache control mechanisms, which meant both clients and servers could be much smarter about when data needed to be sent and when it didn’t, and they did so in a way that meant folks didn’t need to resolve this problem at a “higher level” (see the first sketch at the end of this section). This was in part built on top of the evolving set of formalized and well-defined request and response headers, another abstraction, which also included the idea of “content types” to set clear contracts and expectations about “the kind of data” being requested and/or provided.

It also included formalization of things like the Authorization header, which, along with the formalization of HTTPS in 2000 and the evolution of its underlying cryptographic protocol (officially reaching TLS 1.2 just months before MSS was released), provided at least some peace of mind about the most egregious security concerns and content theft.

Some already-present features wouldn’t be taken advantage of until much later (range requests to make it easy to store a single media file and just ask for “the relevant bits”; see the second sketch below). Others would crop up later that would help (HTTP/2 PUSH for eager fetching in “Low Latency” extensions to HAS protocols). Still others would benefit HAS automatically and transparently, so long as the client and the server supported them (HTTP/3 and QUIC reducing handshake time and packet overhead, along with further connection pool optimizations).
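To make those caching and connection-reuse mechanics concrete, here’s a minimal sketch using Python’s `requests` library. The URL, the playlist name, and the bearer token are all hypothetical placeholders, not details from any particular HAS implementation:

```python
import requests

session = requests.Session()  # keep-alive: reuses one connection across requests
session.headers["Authorization"] = "Bearer <token>"  # formalized auth header; token is a placeholder

# First fetch: the server describes the payload and its cacheability via headers.
url = "https://example.com/videos/playlist.m3u8"  # hypothetical URL
resp = session.get(url)
print(resp.headers.get("Content-Type"))   # the "content type" contract
print(resp.headers.get("Cache-Control"))  # how long this response may be cached
etag = resp.headers.get("ETag")

# Revalidation: if the resource hasn't changed, the server can answer
# 304 Not Modified with an empty body, so no redundant bytes cross the wire.
if etag:
    resp2 = session.get(url, headers={"If-None-Match": etag})
    print(resp2.status_code)  # 304 if the cached copy is still fresh
```

Note that the client never has to invent its own freshness protocol; the “when to send data” question is answered entirely by standard headers.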
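And here’s an equally minimal sketch of the range-request idea: rather than pre-splitting a media file, a client can ask for just the bytes it needs, assuming the (hypothetical) server supports the Range header. The byte offsets are illustrative, not from a real media index:

```python
import requests

url = "https://example.com/videos/movie.mp4"  # hypothetical single-file media URL

# Ask for only the first mebibyte of the file instead of the whole thing.
resp = requests.get(url, headers={"Range": "bytes=0-1048575"})

print(resp.status_code)                   # 206 Partial Content if ranges are supported
print(resp.headers.get("Content-Range"))  # e.g. "bytes 0-1048575/<total size>"
print(len(resp.content))                  # at most 1 MiB of the file
```

Nothing here is media-specific; the same request works for any large file, which is precisely the point.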
Been there, done that (aka Maturity)

Most of the functionality described above was the result of a decade of addressing and improving on generic issues that were no longer just about HTML and web pages. Moreover, there was no reason to think these improvements or this focus would stop anytime soon.

But it wasn’t just the protocol itself that had matured. Standards and architectures had been evolving around HTTP. Roy Fielding finished his doctoral dissertation laying the groundwork for REST in 2000, and it was fast becoming the way clients talked to servers for all sorts of use cases, including binary data uploads and downloads (making more specialized protocols like FTP and Gopher less and less common). One of the primary reasons for this was that, even if most servers were still stateful, REST (over HTTP) didn’t have to be, which meant it could scale both vertically and horizontally much more easily, and could do so with far less resource overhead than stateful, session-based connections.

The infrastructure to take advantage of these protocol details had also matured. Load balancing solutions were by this point especially well-trodden for HTTP, working particularly well with (mostly or entirely) stateless RESTful or REST-like server architectures, which made for a much clearer, simpler, and cheaper answer to the sorts of scaling complexities mentioned above.

But it wasn’t just “on-prem” server solutions that had grown. Akamai’s first commercial CDN offering was released in 1999, almost a decade earlier, and it and other CDNs had used that time to build smarter and more efficient distribution architectures (using HTTP) for global reach, regardless of where the content originally came from. And while additional media-specific optimizations would eventually evolve, the fundamentals were again generic solutions to generic problems.

Everywhere you look… (aka Ubiquity)

Both because “the web” was growing fast and because HTTP had already evolved (and would continue to evolve) into something much broader than simply “GET-ing web pages and their related content,” it was also one of the protocols most likely to exist on most devices, across clients and servers. For better or (and?) worse, HTTP was showing clear signs of dominating much of the internet, much like JavaScript is currently showing signs of dominating much of the development space (also for better and worse).

The number of HTTP servers available was massive compared to “traditional” streaming media servers; they existed in multiple languages and included many free options. This also meant they could be “built on top of,” updated, and improved on for whatever use case, something that simply wasn’t available for RTMP servers. Tooling, tutorials, testing frameworks, and many other “supporting actors” were widely available. It also meant that engineers who had a solid and deep grasp of HTTP could be brought up to speed and apply that base knowledge to HAS, unlike with protocols like RTMP, which were much more domain-specific at a “deep” level.

And this same sort of ubiquity was happening on “the client side.” The most common refrain about the “WAP”-enabled phones mentioned above was, “this should just work with regular HTTP and HTML,” and that’s what ended up happening with the advent of smartphones like the iPhone and Android devices. For operating systems, network-centric APIs, and the development environments that corresponded to them, if there was going to be any effort to make “application layer” protocol support easy and convenient, HTTP was going to be on the short list, if not at the top of the list.

All good things… (aka Conclusion)

Any one of these on its own would help with the various concerns of cost, scale, and reach, but together they reinforced one another, practically guaranteeing that the benefits of functionality, maturity, and ubiquity would continue, and in a way almost entirely independent of HTTP’s use for streaming media. Since I’m writing this over a decade after even the latest major HAS standard, there’s undoubtedly a bit of hindsight bias here; we know all of these things ended up being true. But that just means it was, in fact, a good decision, even if only confirmed in hindsight.

Or rather, it was a good decision so long as folks could figure out a way to take advantage of these things. That’s where the other shared details of HAS come in, but that’s also for another blog post.