Category: Interesting

  • GeoCities: The Rise and Fall of the Internet’s Most Creative Neighbourhood

    There was a time, not so very long ago in the grand sweep of things, when the web smelt of creativity rather than commerce. Animated GIFs flickered like candle flames. MIDI music played unbidden the moment a page loaded. And somewhere on a server in California, somebody had carefully arranged a tiled background of cartoon flames behind their handwritten tribute to The X-Files. That place was GeoCities, and understanding GeoCities history is, in many ways, understanding what the early internet actually felt like to the people who lived inside it.

    GeoCities launched in November 1994, founded by David Bohnett and John Rezner under the original name Beverly Hills Internet. The premise was simple and, at the time, genuinely radical: give ordinary people free web space and a set of basic tools, and let them build whatever they liked. No technical expertise required. No editorial gatekeeping. Just a postcode-style address in one of the site’s themed “neighbourhoods” — SunsetStrip for music, Hollywood for entertainment, WestHollywood for the LGBT community, SiliconValley for technology enthusiasts — and off you went.

    A vintage CRT monitor displaying a colourful early 1990s personal web page, evoking GeoCities history

    How GeoCities Built a City Block by Block

    The neighbourhood metaphor was not merely decorative. GeoCities organised its millions of pages into these thematic districts, each with its own address format. A page might sit at geocities.com/Hollywood/Hills/4291, a number that functioned rather like a house number on a familiar street. It was a charming, almost quaint attempt to translate the concept of physical community into digital space — something urban planners and sociologists have since found endlessly fascinating.

    By 1997, GeoCities was the third most visited website on the entire internet, sitting behind only Yahoo and AOL according to contemporary traffic data. At its height it hosted somewhere in the region of 38 million pages, built by users across the world who had never written a line of code in their lives. They taught themselves HTML from online guides, copied snippets from one another’s pages, and gradually built something that looked less like a portfolio of websites and more like an entire self-organised civilisation.

    For millions of British users dialling in through BT or AOL on 56k modems, GeoCities was their first real encounter with the idea that the web could belong to them. Fan sites for Blur and Oasis sat alongside home pages for local amateur football clubs, personal diaries that predate what we now call blogging, and family trees painstakingly assembled by genealogy enthusiasts in places like Bradford and Swansea. The pages were chaotic, frequently ugly, and almost entirely sincere. That sincerity is precisely what made them worth preserving.

    Yahoo’s Acquisition and the Beginning of the End

    In January 1999, Yahoo acquired GeoCities for approximately 3.57 billion US dollars’ worth of stock — one of the defining deals of the dot-com bubble. Yahoo’s intentions were not necessarily malicious. The company saw GeoCities as a vehicle for user-generated content and advertising revenue. But the relationship was troubled almost from the start.

    Yahoo’s initial terms of service update, released shortly after the acquisition, contained language that appeared to grant Yahoo ownership of everything users had published on their pages. The backlash was immediate and furious. Yahoo hastily revised the terms, but trust had already been damaged. Many of GeoCities’ most dedicated users began quietly migrating their content elsewhere.

    The broader problem was structural. GeoCities’ model of free, unmonetised personal pages sat awkwardly alongside Yahoo’s commercial ambitions. Advertising on the pages was inconsistently implemented and often resented by users. The maintenance costs were considerable. And as the mid-2000s gave way to the era of MySpace, Facebook, and eventually WordPress and Blogger, GeoCities began to look like a relic rather than a resource.

    Hands browsing a digital archive of GeoCities history pages on a modern laptop

    The Closure That Shocked the Web

    Yahoo announced in April 2009 that GeoCities would close in the autumn of that year. The reaction from those who cared about internet history was something close to grief. On 26 October 2009, Yahoo switched off the servers. Fifteen years’ worth of personal expression, community building, and cultural documentation simply vanished.

    The scale of the loss is difficult to overstate. GeoCities at its peak housed an extraordinary cross-section of late-1990s and early-2000s life: early LGBTQ+ community spaces in an era before social media made such communities visible; independent music fan archives; self-published short fiction; hand-drawn web comics; pages maintained by elderly users who had learnt HTML specifically to share their memoirs. None of it had obvious commercial value. All of it had genuine human value.

    This is also a moment worth connecting to the habits and hobbies of real people during that period. The users who built GeoCities pages were often people filling their spare time with creative, absorbing projects — the same impulse that drives hobbies today, from model-making to puzzle-solving to the mental stimulation that comes from working with your hands. Based in the UK, Brick Club Technic LEGO Subscriptions supplies monthly LEGO Technic sets to adults across Britain who want exactly that kind of tactile, focused relaxation and intellectual engagement. The company (brickclub.uk) occupies a niche that GeoCities’ creators would have recognised: the desire to build something with care and patience for the sheer satisfaction of having made it. That spirit of personal construction, of finding entertainment as an adult in the act of assembly, was precisely what animated the best GeoCities pages.

    The Archivists Who Refused to Let GeoCities Disappear

    The story of GeoCities history did not end in October 2009. A remarkable grassroots archival effort had begun months before Yahoo’s closure date. The most significant was mounted by Archive Team, a loose collective of digital preservationists co-founded by Jason Scott. Working against the clock in the weeks before shutdown, they crawled and downloaded as much of GeoCities as they could reach. Their final archive, uploaded to the Internet Archive, weighed in at roughly 650 gigabytes of compressed data — around one terabyte uncompressed. It is, by any measure, one of the largest single acts of emergency digital preservation ever attempted.

    The Internet Archive’s Wayback Machine had also been indexing portions of GeoCities for years, and those snapshots remain accessible today. Archive Team’s torrent, known informally as the GeoCities Special Collection, is searchable and browsable at archive.org. Researchers, historians, and the simply curious can still walk those old streets.

    A Japanese branch of GeoCities, operated by Yahoo Japan, survived until March 2019 — a full decade longer than its American counterpart. Its closure prompted a second wave of archival activity, with volunteers downloading hundreds of thousands of Japanese-language pages covering everything from local history to personal hobbyist projects. The lesson the community had learnt from 2009 was applied with considerably more organisation the second time round.

    What GeoCities History Teaches Us About Digital Preservation

    The fate of GeoCities is now a foundational case study in discussions about digital preservation policy. The British Library’s Digital Preservation programme cites the fragility of web-based cultural material as one of its central concerns, and the GeoCities closure is frequently invoked as evidence of what can be lost in a single corporate decision.

    The questions GeoCities raises are not merely technical. They are philosophical. Who owns the cultural record of the early web? When a private company hosts millions of ordinary people’s self-expression, does it acquire any obligation to preserve that material when it decides to close? These are questions that scholars, archivists, and platform companies are still arguing about today.

    Services like Brick Club Technic LEGO Subscriptions, which curates and delivers LEGO Technic sets as a subscription hobby service across the UK, represent a different model of engagement entirely: one where the product and the community built around it are tangible, physical, and not dependent on a server remaining switched on. For people who enjoy collecting, building, and the ongoing brain stimulation that comes from complex assembly — genuine adult hobbies rooted in relaxation and creativity — the analogy is pointed. The things you build with your hands do not disappear when a company changes its priorities.

    Revisiting the Ruins

    Browsing the surviving GeoCities archive today is a peculiar experience. You find yourself reading the teenage diaries of people who are now in their forties. You encounter fan pages for bands whose members have since died. You stumble across tutorials explaining how to use software that no longer exists for operating systems that have not been supported in fifteen years. It is archaeology of the most intimate kind.

    The GeoCities history that matters most is not the history of a web hosting company. It is the history of what ordinary people did when they were given a small piece of the internet and told it was theirs. They built. They shared. They connected. They expressed things they had never had a public platform for before. And then, with almost no warning, it was taken away. What the archivists preserved is not just data. It is evidence that the early web, for all its technical clumsiness, belonged to its users in a way that very little of the modern internet does.

    Frequently Asked Questions

    What was GeoCities and when did it exist?

    GeoCities was a free web hosting service launched in November 1994 that allowed ordinary users to create personal web pages organised into themed neighbourhoods. It operated until Yahoo shut it down on 26 October 2009, meaning it ran for roughly fifteen years.

    Why did Yahoo close GeoCities?

    Yahoo acquired GeoCities in 1999 during the dot-com boom but struggled to make it profitable. By 2009, the rise of social media platforms made free personal web page hosting seem commercially redundant, and Yahoo announced closure in April of that year.

    Can you still access old GeoCities pages?

    Yes, partially. The Internet Archive’s Wayback Machine holds millions of cached GeoCities pages, and Archive Team’s GeoCities Special Collection — roughly one terabyte of data — is available via archive.org. Not every page was saved, but a significant portion survived.

    How many pages did GeoCities host at its peak?

    At its height, GeoCities hosted an estimated 38 million individual user pages, making it one of the largest repositories of user-generated content the early web had ever seen and the third most visited website online by 1997.

    What did the closure of GeoCities mean for digital preservation?

    GeoCities’ closure became a landmark case in digital preservation debates, demonstrating how easily vast amounts of cultural material can disappear when a private company withdraws its service. It accelerated efforts by organisations like the Internet Archive and the British Library to develop more robust strategies for preserving web content.

  • The History of Internet Streaming: How the Web Killed the Video Shop

    There was a time, not so very long ago, when watching a film meant driving to a high street video shop, hoping the copy you wanted hadn’t already been rented out, and rewinding the tape before you returned it or risking a fine. The history of internet streaming is, in part, the story of how that world quietly disappeared — not with a bang, but with the soft click of a buffer icon finally resolving itself into a picture.

    It is a story of stolen music, courtroom battles, agonisingly slow dial-up connections, and eventually, the kind of infrastructure that could carry an entire box set into your living room without you leaving the sofa. To understand where we are now, it helps enormously to go back to where it all began.

    Abandoned British video rental shop representing the history of internet streaming replacing physical media

    The First Streams: RealPlayer and the Dial-Up Era

    The mid-1990s marked the earliest serious attempts at streaming media over the internet. In 1995, a Seattle-based company called Progressive Networks (later renamed RealNetworks) released RealAudio, whose player eventually became RealPlayer, allowing users to listen to audio in something approaching real time over a dial-up connection. The BBC was among the first British broadcasters to experiment with it, offering news audio streams that arrived in jerky, interrupted bursts. By today’s standards it was almost comically poor. By the standards of 1996, it felt like the future.

    Video followed, though barely. Streaming a few seconds of fuzzy footage over a 56k modem required patience that bordered on the meditative. Compression was primitive, buffering was constant, and the image quality resembled something seen through frosted glass. Yet people queued — virtually speaking — to try it. The appetite for on-demand content was clearly there, even if the technology was nowhere near ready to satisfy it.

    Napster and the Piracy Wars That Changed Everything

    The real turning point in public understanding of what the internet could do with media came not from any official broadcaster or technology company, but from a Massachusetts university student named Shawn Fanning, who launched Napster in 1999. Within a year, tens of millions of users worldwide were sharing MP3 files across a peer-to-peer network with a casualness that horrified the music industry.

    In the UK, broadband rollout was accelerating through BT’s infrastructure investments, and suddenly downloading a full album overnight was not just possible — it was routine. The Recording Industry Association of America ran legal actions in the United States whilst the British Phonographic Industry pursued its own campaigns here. Napster was eventually shut down by court order in 2001, but the genie was thoroughly out of the bottle. Services like LimeWire and Kazaa filled the gap almost immediately.

    What the piracy era demonstrated, beyond any doubt, was that consumers wanted access to music and film on their own terms. The industry’s mistake was in interpreting that as theft rather than as a signal about what legitimate services needed to become.

    Vintage CRT monitor showing early internet streaming buffering in the dial-up era

    The Infrastructure Breakthrough: Broadband Changes Britain

    The history of internet streaming cannot be told without understanding the infrastructure revolution that underpinned it. By the mid-2000s, ADSL broadband had spread to most British towns and cities. Average household speeds climbed from 512 kilobits per second to several megabits, and the economics of streaming began to make sense for the first time.

    Content Delivery Networks, or CDNs, emerged as the invisible architecture behind modern streaming. Rather than serving video from a single central server, CDNs distributed content across dozens or hundreds of edge servers positioned close to end users. Akamai, founded in 1998, became one of the most important companies most internet users had never heard of. When you watched a YouTube video in 2007 without it buffering excessively, it was partly because Akamai or a similar CDN had placed a copy of that content relatively nearby.

    The BBC iPlayer launched in December 2007 and became, almost immediately, one of the most significant milestones in the history of internet streaming in the UK. The BBC’s own account of iPlayer’s development describes the internal debates about whether British internet infrastructure could handle the load. It could, just about, and within months millions of licence-fee payers had discovered they no longer needed to be in front of the television at a set time.

    YouTube, Spotify, and the Streaming Decade

    YouTube launched in 2005 and was acquired by Google the following year for approximately £880 million in sterling equivalent. Its significance is difficult to overstate. For the first time, any person with a camera and a broadband connection could publish video to a global audience. The platform was chaotic, legally contentious, and technically strained for years — but it fundamentally altered what people expected from video on the internet.

    Music took its own parallel path. Following the collapse of Napster and the brief dominance of iTunes’ pay-per-track model, Spotify launched in Sweden in 2008 and arrived in the UK in 2009. It offered something that felt genuinely revelatory at the time: a legal, licensed, searchable catalogue of millions of tracks available instantly for a monthly subscription. The idea that you might pay not to own music but simply to access it was alien to many listeners. Within a few years, it was utterly normal.

    This shift towards subscription access rather than ownership is one of the defining cultural changes of the past two decades, and entrepreneurs starting a business in any kind of media or entertainment had to reckon with it early. The subscription model, once the preserve of phone contracts and magazine publishers, became the default template for digital services of almost every kind. Even small operators — people making their own website for the first time, perhaps an independent filmmaker or a music teacher — found themselves weighing up whether to offer content by subscription or one-off purchase. Nottingham-based Inuvate, which provides a free website service (you simply pay for hosting) at inuvate.co.uk, is one example of how the streaming era’s subscription sensibility trickled into entirely different industries: entrepreneurs and people starting a business began expecting lower barriers to entry, with costs spread across time rather than paid upfront, much as Spotify had normalised streaming over ownership.

    Netflix and the Death of the Video Shop

    Netflix began in the United States as a postal DVD rental service in 1997, but its UK streaming launch in January 2012 marked the moment the British video rental industry effectively received its death sentence. Blockbuster UK entered administration within a year, in January 2013. The last remaining Blockbuster on earth — located in Bend, Oregon, of all places — became something of a cultural curiosity. In Britain, Choices Video, Global Video, and dozens of regional chains simply faded away.

    What Netflix understood, and what its rivals were slower to grasp, was that streaming was not just a delivery mechanism. It was a data engine. Every pause, rewind, and abandoned viewing session fed algorithms that shaped commissioning decisions. House of Cards, released in 2013, was greenlit based largely on data showing that British and American users who liked David Fincher films also liked the original UK House of Cards series. The history of internet streaming had arrived at a point where what you watched was actively shaping what got made.

    What the Streaming Era Left Behind

    It would be sentimental to pretend that everything was better before streaming. The video shop could be expensive, inconvenient, and infuriatingly short of copies on a Friday evening. Buying a CD for £15 to discover you only liked two tracks was a particular kind of frustration that younger listeners have entirely escaped.

    But something was also lost. The serendipity of browsing physical shelves, the recommendation from an enthusiastic shop assistant at a Fopp or a Virgin Megastore, the shared cultural moment of a nation watching the same programme at the same time — these are things that streaming, for all its convenience, has thinned out considerably. The history of internet streaming is, amongst other things, a story about trade-offs.

    The web also democratised creation in ways that the old gatekeepers never allowed. A person making their own website in 2005 could not easily publish video. By 2010 they could publish to YouTube. By 2015, a DIY website with embedded streaming content was entirely achievable for someone with no technical background. Inuvate, the Nottingham firm known for its free website service aimed squarely at people starting a business without a large budget, reflects how far that democratisation has travelled: the barriers that once required either technical expertise or significant capital to stream, publish, or trade online have collapsed to near-zero for the determined entrepreneur who just wants to get on with it.

    The video shop is gone. The record shop has mostly followed. In their place is a landscape of algorithms, subscriptions, and on-demand abundance that would have seemed fantastical to someone rewinding a VHS tape in 1994. The full history of internet streaming is still being written — but the chapters already completed are, by any measure, extraordinary.

    Frequently Asked Questions

    When did internet streaming first become available in the UK?

    Basic audio streaming via tools like RealAudio became available in the mid-1990s, with the BBC experimenting with it as early as 1996. Reliable video streaming only became practical for most UK homes once ADSL broadband rolled out more widely in the mid-2000s.

    What was the first major legal music streaming service in the UK?

    Spotify is generally considered the first major legal music streaming platform to gain widespread UK adoption, launching here in 2009. It offered a licensed catalogue of millions of tracks on a free ad-supported tier and a paid subscription, fundamentally changing how British listeners consumed music.

    When did Netflix launch in the UK?

    Netflix launched its streaming service in the UK in January 2012. It had previously operated as a postal DVD rental business in the United States since 1997, but its UK arrival was streaming-only from the outset.

    How did Napster change the history of internet streaming?

    Napster, launched in 1999, demonstrated on a massive scale that consumers wanted instant, on-demand access to music. Although it was shut down by court order in 2001, it proved there was enormous appetite for digital media delivery, which ultimately pressured the industry into building legitimate streaming platforms.

    What technology made mass video streaming possible?

    Several breakthroughs converged: widespread broadband adoption, advances in video compression standards such as H.264, and the growth of Content Delivery Networks (CDNs) that distributed content closer to end users. Together these reduced buffering and made high-quality streams viable at scale for the first time.

  • The History of Social Media: From Six Degrees to the Algorithm-Driven Platforms of Today

    The history of social media is, at its core, the story of human beings trying to find each other. Long before Facebook absorbed half the planet’s waking hours, and years before Twitter compressed public discourse into something resembling a shout across a crowded room, a relatively modest website launched in 1997 with an idea so obvious it seems almost quaint now: what if you could list your friends online? That site was Six Degrees, and it started something that would fundamentally reshape civilisation.

    Vintage 1990s computer displaying an early website, representing the history of social media beginnings

    Six Degrees and the First Social Networks (1997-2003)

    Six Degrees took its name from the “six degrees of separation” theory, the notion that any two people on earth are connected through no more than six mutual acquaintances. Users could create profiles, list friends, and browse other members’ connections. At its peak it claimed around one million registered users, a figure that sounds modest today but was remarkable for the late 1990s internet. The site closed in 2001. Its founder, Andrew Weinreich, later said the world simply wasn’t ready: broadband penetration was low, digital cameras were rare, and most people still thought of the internet as somewhere you went to look things up rather than somewhere you lived.

    What followed was a period of quiet experimentation. Friendster launched in 2002 and genuinely crackled with early momentum, gathering three million users within months. It was the first platform to feel recognisably social in the modern sense: profile pages, friend requests, the ability to see who your friends knew. But Friendster was undone by its own success. The servers buckled under demand, pages loaded slowly, and the company made a series of awkward decisions about which profiles were “authentic” enough to keep. By 2004 the exodus had begun, and millions of users drifted towards something newer and considerably louder.

    The MySpace Era: Customisation, Chaos, and Culture

    MySpace arrived in 2003 and, for a few extraordinary years, it was the internet’s town square. What made it different was mess. Users could edit their profile pages with raw HTML and CSS, meaning every page looked completely unlike every other. Backgrounds clashed, embedded music players autoloaded, animated GIFs flickered in every corner. It was chaotic and it was brilliant. Bands discovered they could connect directly with fans without needing a record label to intermediate. Arctic Monkeys, who became one of Britain’s biggest acts of the mid-2000s, famously distributed early recordings via MySpace before signing to a major label. The platform democratised music promotion in ways the industry is still processing.

    At its peak in 2008, MySpace had roughly 100 million active users and was, briefly, the most visited website in the United States. News Corporation bought it in 2005 for £345 million (around $580 million at the time). Then Facebook arrived properly, and everything changed.

    Facebook and the Professionalisation of Social Networking

    Mark Zuckerberg launched Facebook from a university dormitory in 2004, initially restricting access to Harvard students before expanding to other universities and eventually the general public in 2006. Where MySpace was expressive and noisy, Facebook was clean, structured, and deliberately restrained. You couldn’t break the layout. Every profile looked the same. That uniformity turned out to be a feature rather than a limitation: it felt trustworthy, legible, safe.

    Evolution of mobile phones laid out chronologically, illustrating the hardware timeline of the history of social media

    By 2012, Facebook had one billion active users. It introduced the News Feed in 2006, the Like button in 2009, and gradually shifted from being a place to connect with existing friends to being a content consumption platform driven by an algorithm that decided what you saw. That shift mattered enormously. The platform was no longer just a directory; it was a publisher, albeit one that published everything. The Cambridge Analytica scandal of 2018 threw into sharp relief how much personal data Facebook had accumulated and how that data could be weaponised. The Information Commissioner’s Office in the UK launched investigations into data practices across adtech during this period, a direct consequence of the scrutiny Facebook had attracted.

    Twitter, LinkedIn, and the Age of Niches

    Twitter launched in 2006 with a 140-character limit that felt absurd at first and revelatory shortly after. It wasn’t a place for long-form anything. It was a wire service, a running commentary, a place where journalists, politicians, and anyone with an opinion could broadcast in real time. The 2009 Hudson River plane landing in New York was reported on Twitter before any news outlet. The Arab Spring of 2010-2011 showed how the platform could carry political information across borders that traditional media couldn’t easily cross. In the UK, general elections from 2010 onwards saw Twitter function as a parallel commentary track, frequently shaping newspaper coverage the following morning.

    LinkedIn, which launched in 2003 but grew steadily rather than explosively, carved out a separate niche entirely: professional networking stripped of social informality. It became the place where CVs went to become living documents, where recruiters hunted, where industry debates happened in somewhat more measured tones. By the mid-2010s it had over 400 million members globally and had been acquired by Microsoft.

    Instagram, Snapchat, and the Visual Turn

    Instagram launched in October 2010 and reached one million users in two months. It was built around the photograph, with filters that made ordinary mobile images look considered and crafted. Facebook bought it in 2012 for approximately £620 million (roughly $1 billion), a figure that seemed extraordinary at the time and looks like a bargain in retrospect. Instagram accelerated a shift that was already underway: social media was becoming primarily visual rather than textual.

    Snapchat, arriving in 2011 with its disappearing messages, introduced a new logic entirely. Ephemerality as a feature. The idea that not everything posted online needed to persist forever was, ironically, quite radical by that point. Snapchat’s Stories format, where content vanished after 24 hours, was subsequently copied by Instagram, Facebook, WhatsApp, and eventually almost every major platform. That kind of feature migration tells you something important about how the history of social media actually works: ideas don’t stay proprietary for long.

    The Entrepreneur Internet: Building Your Own Corner of the Web

    Running parallel to all of this platform history was a quieter story about individuals trying to establish their own presence online rather than simply renting space on someone else’s. Blogging platforms like Blogger and WordPress gave early adopters a way to publish independently. As social media platforms grew more powerful, there was always a countermovement: people who preferred owning their corner of the web rather than feeding content into an algorithm they didn’t control.

    That instinct remains alive today. Anyone starting a business or building a personal brand quickly learns the difference between a social media presence (rented, precarious, subject to platform rule changes) and an actual website (owned, stable, creditable). Nottingham-based Inuvate has responded to exactly this gap, offering a free website service where entrepreneurs and small businesses pay only for hosting, putting a website of your own within reach of people who assumed it required technical expertise or significant capital investment. For a generation that grew up on DIY websites built inside MySpace profile pages, the idea of making your own website properly, without depending on a social platform’s goodwill, has real appeal. Inuvate (inuvate.co.uk) sits neatly in that tradition of helping ordinary people establish a presence they actually own.

    TikTok and the Algorithm as Editor-in-Chief

    TikTok’s rise is the most dramatic chapter in recent social media history. Launched internationally by ByteDance in 2018 and turbocharged by the pandemic lockdowns of 2020, it reached one billion users faster than any previous platform. Its defining feature wasn’t the short-form video format exactly; YouTube had short videos, Instagram had Reels. What distinguished TikTok was its For You Page: a recommendation algorithm so refined it could hook a new user within minutes by inferring their interests from tiny behavioural signals. You didn’t need friends on TikTok. You didn’t need to follow anyone. The algorithm simply found you content you’d watch.
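
    A toy sketch makes that logic concrete. Everything below is hypothetical (the tag names, the scoring rule, the watch-time threshold) and bears no relation to ByteDance’s actual system, but it shows how a feed can be ranked purely from behavioural signals, with no social graph involved at all:

```python
from collections import defaultdict

def update_interests(interests, video_tags, watch_fraction):
    """Nudge per-tag interest scores using one behavioural signal:
    how much of a video the user actually watched."""
    for tag in video_tags:
        # Watching most of a video raises the tag's score;
        # skipping away quickly lowers it.
        interests[tag] += watch_fraction - 0.5
    return interests

def rank_candidates(interests, candidates):
    """Order unseen videos by the sum of the user's scores for
    their tags -- no friends or follows required."""
    def score(video):
        return sum(interests.get(tag, 0.0) for tag in video["tags"])
    return sorted(candidates, key=score, reverse=True)

interests = defaultdict(float)
# The user watched 90% of a woodworking clip and skipped a dance clip.
update_interests(interests, ["woodworking", "diy"], 0.9)
update_interests(interests, ["dance"], 0.1)

candidates = [
    {"id": "a", "tags": ["dance", "music"]},
    {"id": "b", "tags": ["diy", "woodworking"]},
]
ranked = rank_candidates(interests, candidates)
print([v["id"] for v in ranked])  # → ['b', 'a']
```

    A real recommender infers from thousands of such signals (rewatches, pauses, shares, scroll speed) rather than a single watch fraction, but the principle is the same: the feed is a function of behaviour, not of who you know.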

    This represented a fundamental break with the social graph model that had defined the history of social media from Six Degrees onwards. Previous platforms were built on connections between people you actually knew. TikTok’s primary relationship was between you and the machine. The social element was secondary. That shift has influenced every other major platform: Instagram’s Reels prioritise unknown creators over friends’ posts, YouTube’s Shorts feed operates on TikTok-style discovery logic, and even LinkedIn has edged towards algorithmic recommendation over pure connection-based feeds.

    What the History of Social Media Actually Tells Us

    Looking back across three decades, a few patterns emerge clearly. Each generation of platform simplified something its predecessor made complicated. Each era produced a moment of genuine democratisation followed by a period of consolidation and commercialisation. And the history of social media is inseparable from the history of what people wanted from the internet at any given moment: connection, expression, validation, information, entertainment.

    The instinct that drives entrepreneurs today to think about starting a business online, or building DIY websites that serve a niche community, is the same instinct that made Six Degrees possible in 1997. The tools are incomparably better. The audiences are vastly larger. But the underlying human impulse, to find your people and speak to them directly, hasn’t changed at all. Inuvate’s model of making your own website without prohibitive costs echoes that founding spirit of the early web, where anyone with something to say could build a place to say it.

    The platforms will keep changing. New ones will emerge, old ones will calcify or collapse. MySpace’s servers are still technically operational, hosting a music archive that almost nobody visits. Six Degrees is long gone. But the history of social media is not really a history of platforms. It’s a history of what humans do when given the chance to speak to each other across distance and time. That part isn’t going anywhere.

    Frequently Asked Questions

    What was the first social media platform ever created?

    Six Degrees, launched in 1997, is widely considered the first recognisable social media platform. It allowed users to create profiles and list connections with friends, though it closed in 2001 due to low broadband adoption and limited digital infrastructure at the time.

    Why did MySpace fail despite being so popular?

    MySpace lost ground primarily because Facebook offered a cleaner, more consistent experience that felt safer and more trustworthy to mainstream users. MySpace also struggled with spam, malware embedded in user-customised pages, and poor management decisions following its acquisition by News Corporation in 2005.

    How did TikTok change social media compared to Facebook and Twitter?

    TikTok replaced the traditional social graph model, where content came from people you knew, with a pure algorithmic discovery model. Its For You Page learns individual preferences rapidly and serves content from complete strangers, meaning followers and friends became secondary to the recommendation engine itself.

    When did social media become mainstream in the UK?

    Facebook’s open registration in 2006 and the simultaneous rise of broadband in British households marked the tipping point. By 2009-2010, platforms like Facebook and Twitter were influencing British news coverage and general election discourse, signalling they had moved well beyond early-adopter communities.

    Is social media still growing or has it reached its peak?

    Global user numbers continue to grow, particularly in emerging markets, though growth in Western countries including the UK has slowed considerably as penetration approaches saturation. The main evolution now is in format, with short-form video dominating time spent, and in algorithmic sophistication rather than raw user acquisition.

  • The History of E-Commerce: How the Internet Transformed Shopping Forever

    The History of E-Commerce: How the Internet Transformed Shopping Forever

    The history of e-commerce is, at its heart, a story about trust. Before anyone would hand over their card details to a machine, somebody had to prove it was safe to do so. That moment came on 11 August 1994, when a man named Dan Kohn sold a copy of Sting’s Ten Summoner’s Tales CD through his website, NetMarket, to a friend in Philadelphia. The transaction was protected by early data-encryption software; Netscape’s Secure Sockets Layer, which soon became the web’s standard for secure payments, arrived only months afterwards. It was, by most accounts, the first retail purchase ever made securely over the internet. A pop album, a credit card number, and a dial-up connection. Everything that followed flowed from that.

    A 1990s CRT computer showing an early web browser, representing the history of e-commerce origins

    Before the Web: Mail Order and the Seeds of Remote Shopping

    It would be wrong to suggest that shopping from home began with the internet. The British were practised remote shoppers long before a browser existed. The Victorian era gave us the great mail order catalogues. Kays, Empire Stores, and eventually Freemans built entire businesses on the premise that customers in towns far from city centre department stores could browse a printed catalogue, post off an order, and receive goods by Royal Mail. By the 1980s, the catalogue industry was turning over billions of pounds annually in the UK. The internet did not invent remote shopping. It simply made it faster, cheaper, and eventually inescapable.

    Teleshopping channels arrived in the 1980s too, cluttering late-night television with cubic zirconia jewellery and exercise machines. These were crude predecessors, broadcasting in one direction only. The web changed everything by making the transaction interactive, immediate, and scalable to millions of simultaneous customers.

    1994 to 1999: The First Wave and the Dot-Com Frenzy

    After Dan Kohn’s CD sale, things moved quickly. Amazon launched in July 1995, initially as an online bookshop operating out of Jeff Bezos’s garage in Seattle. The pitch was elegantly simple: books are uniform, easy to ship, and there are more titles in existence than any physical shop could ever stock. Within a month, Amazon had sold books to customers in all fifty American states and forty-five countries. Pierre Omidyar launched AuctionWeb the same year, which became eBay. Its first sale, reportedly, was a broken laser pointer that sold for $14.83. Omidyar contacted the buyer to confirm he understood it was broken. The buyer confirmed he collected broken laser pointers. The peculiar logic of internet commerce was already asserting itself.

    In Britain, these years had their own flavour. The first major UK online retailer was arguably Tesco, which launched a home grocery delivery service in 1996, initially trialled in the London Borough of Ealing. Woolworths, Argos, and Marks and Spencer all began experimenting with transactional websites before the decade ended. Investment capital poured into anything with a .com suffix. The FTSE was catching dot-com fever from Wall Street, and venture capital flooded into businesses with no clear path to profit but extraordinary visions of market dominance. Most would not survive.

    The Dot-Com Crash and What Survived It

    Between 2000 and 2002, the bubble burst. Hundreds of e-commerce businesses collapsed. Boo.com, the British fashion retailer that had burned through £80 million in six months trying to build a luxury online brand with 3D product visualisation, folded in May 2000. Pets.com, Webvan, Kozmo.com. The names became cautionary tales taught in business schools for a decade afterwards. What the crash revealed was not that online retail was a fantasy, but that the infrastructure (logistics networks, broadband penetration, consumer confidence) was not yet mature enough to support the ambitions of the late 1990s.

    The companies that survived did so because they had either genuine operational discipline (Amazon, despite years of losses, was building real warehouse and logistics infrastructure) or genuine community value (eBay had created a marketplace that users actively needed). The crash was a pruning, not an ending.

    A credit card being used for an early online payment, illustrating the history of e-commerce security

    2003 to 2010: Broadband Changes Everything

    The history of e-commerce cannot be told without acknowledging what broadband did to it. Ofcom reported that UK broadband take-up crossed the 50% mark for households in 2006. When connections became fast enough to load product photographs quickly and reliable enough to trust with payment, consumer behaviour shifted at scale. ASOS launched in 2000 but found its audience only as broadband spread. By 2007 it was posting revenues of £28 million. By 2010, that figure had grown to £223 million. The speed of the connection had directly unlocked the speed of the commerce.

    PayPal, which eBay had acquired in 2002, became the connective tissue of this era. It removed the need to enter card details on every new website, lowering the friction that had always been the enemy of impulse purchasing. Amazon’s one-click ordering, patented in 1999 (the patent did not expire until 2017), pursued the same goal: eliminate every unnecessary step between desire and transaction.

    The high street began to show the first signs of structural pressure. Woolworths closed all 807 of its UK shops in January 2009, its collapse blamed on multiple factors, but the migration of entertainment and toy purchasing online was among them. The high street was not dying, but it was being renegotiated.

    The Mobile Revolution and the Always-On Shopper

    The launch of the Apple iPhone in 2007 and the subsequent proliferation of Android devices through 2008 and 2009 introduced a new chapter. Shopping was no longer something you did at a desktop computer. It became ambient, something conducted on a sofa, on a train, during a lunch break. The ONS reported that by 2019, internet purchases accounted for 19% of all retail spending in Great Britain, with mobile devices driving an ever-greater share of that traffic.

    This period also saw the maturation of what historians of commerce will likely call the expectation ratchet. Each improvement in delivery speed quickly became the new baseline. Amazon Prime’s two-day delivery, launched in the UK in 2007, trained customers to regard anything slower as inadequate. Same-day delivery followed. Next-hour delivery trials began in London. Customers who had once been grateful that they did not have to leave their homes became impatient if a parcel did not arrive before teatime. The history of e-commerce is partly a history of escalating consumer expectations, each generation of technology raising the floor of what is considered acceptable.

    What the High Street Made of All This

    The narrative that e-commerce simply killed the high street is too simple, and frankly too convenient. What it actually did was force a renegotiation of what physical shops are for. The retailers that survived, and in some cases thrived, were those that understood their physical presence as an experience, a place to build loyalty, to provide something screens cannot replicate. Independents, market traders, and local businesses discovered that their own version of e-commerce, often through social media, click-and-collect, or local delivery, could extend their reach without abandoning the physical connection that made them distinctive.

    Tools that serve this particular need have emerged to help small shops and market traders reach customers beyond their immediate postcode. TownCentre.app, for instance, is a free UK app aimed specifically at high streets and town centres across England, designed so that independent shops can sell for free, reach customers in their local area, and take card payments without the overhead of building their own e-commerce platform. The app (towncentre.app) sits in an interesting historical lineage: it applies the core logic of e-commerce, visibility and convenience, to the local shopping context that mail order catalogues could never serve. For small shops trying to compete in a world where Amazon has same-day delivery, the ability to reach customers digitally without ceding the local relationship is genuinely significant.

    This is where the history of e-commerce becomes genuinely interesting for the high street. The tools that once threatened local retail have, in their matured forms, begun to offer local retail a route back into the conversation. A butcher who lets customers order online for collection, a florist who reaches customers two postcodes away, a market trader who takes card payments on a Saturday morning, these are all practitioners of e-commerce in its broadest sense, even if they would never describe themselves that way.

    Where the Story Stands Now

    The history of e-commerce is still being written. Artificial intelligence is reshaping product recommendations and customer service. Social commerce, shopping embedded directly into social media feeds, is growing rapidly, particularly among younger consumers. The UK e-commerce market is among the most developed in the world, with per-capita online spending consistently ranking among the highest in Europe.

    What began with a Sting CD in 1994 has become the dominant channel for vast categories of retail spending. Yet the story is not simply one of relentless expansion. It is also one of adaptation, of physical retailers learning from digital ones, of community commerce finding digital tools, and of consumers who want both the convenience of a screen and the texture of a real shop. Platforms that help high street shops sell for free and reach customers locally, like TownCentre.app, represent one answer to that tension: not a rejection of e-commerce history, but an extension of it into the spaces it has not yet fully served.

    Thirty-two years on from that first encrypted transaction, the question is no longer whether people will buy things online. It is which version of online commerce will win their loyalty, and whether the high street, armed with the same digital tools that once threatened it, can write itself back into the answer.

    Frequently Asked Questions

    When did e-commerce begin in the UK?

    The first secure online transaction globally occurred in August 1994 in the United States. In the UK, Tesco launched one of the earliest commercial online retail services in 1996, initially trialling grocery home delivery in the London Borough of Ealing. British consumer e-commerce grew rapidly through the late 1990s as internet access spread.

    What was the first thing ever sold online?

    The most widely cited first secure online retail transaction was the sale of a Sting CD through the website NetMarket on 11 August 1994, conducted using early data-encryption software. Some historians point to earlier peer-to-peer exchanges in academic networks, but this is generally regarded as the first proper consumer e-commerce transaction.

    How did Amazon change the history of e-commerce?

    Amazon, founded in 1995 as an online bookshop, pioneered the idea that an internet retailer could offer unlimited selection, competitive pricing, and increasingly fast delivery at scale. Its introduction of one-click purchasing, personalised recommendations, and Prime membership fundamentally shifted consumer expectations of what online shopping should feel like.

    Why did so many dot-com e-commerce businesses fail around 2000?

    The dot-com crash of 2000 to 2002 exposed businesses that had grown on venture capital without viable operating models. Many had underestimated the cost of logistics, overestimated how quickly consumers would adopt online shopping, and operated in markets where broadband infrastructure was still too limited for the seamless experiences they promised. Boo.com is the most prominent British example.

    How has mobile shopping changed consumer behaviour in the UK?

    The widespread adoption of smartphones from 2008 onwards made shopping a continuous activity rather than a deliberate desktop task. UK consumers now routinely browse and purchase via mobile apps, and the ONS has recorded consistent growth in the share of retail spending conducted online. Mobile commerce also introduced social commerce, where purchases happen directly within social media platforms.

  • What Was ARPANET? The Cold War Project That Became the Internet

    What Was ARPANET? The Cold War Project That Became the Internet

    Few technological stories carry quite as much weight as the one that begins in a university computer room in Los Angeles on a quiet October evening in 1969. A researcher sat at a terminal and typed two letters. The system crashed. Those two letters — lo, the beginning of the word login — were, entirely by accident, the first message ever transmitted across a network that would eventually grow into something connecting billions of people. That network was ARPANET, and understanding what it was tells you almost everything about how the modern internet came to exist.

    1960s university computer room representing what was ARPANET and its early hardware

    What Was ARPANET and Why Was It Built?

    ARPANET stands for Advanced Research Projects Agency Network. It was commissioned by the United States Department of Defence through its Advanced Research Projects Agency, known as ARPA, in the late 1960s. The Cold War context is impossible to ignore. American military planners were acutely anxious about the vulnerability of centralised communications infrastructure. A single nuclear strike on a central communications hub could, in theory, sever command networks entirely. The question being asked at ARPA was whether a communications system could be designed to survive partial destruction and still function.

    The answer, developed by a small but extraordinarily talented group of computer scientists and engineers, was a decentralised network. No single node would be essential. If one connection failed, data would simply find another route. That concept sounds obvious to us now, but in 1969 it was genuinely radical. Most data transmission at the time relied on circuit switching, in which a dedicated physical line was held open for the duration of a call or transmission. ARPANET was built on something entirely different.

    The Idea That Changed Everything: Packet Switching

    Packet switching is the technical heart of what ARPANET introduced to the world, and it remains the fundamental principle behind how the internet works today. Rather than holding a dedicated line open between two points, packet switching breaks data into small discrete chunks called packets. Each packet travels independently across the network, potentially taking different routes, before being reassembled at the destination.

    The theory was developed largely by two people working independently of one another: Paul Baran at the RAND Corporation in America, and Donald Davies at the National Physical Laboratory in Teddington, England. Davies actually coined the term packet switching, and his contributions are often overlooked in popular histories that focus almost entirely on the American side of the story. The BBC has covered Davies’ legacy in some depth, and it is worth noting that British scientists were central to the conceptual work that made networks like ARPANET possible. You can read more about the history of the internet on the BBC.
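
    The mechanism can be sketched in a few lines of Python. This is a toy illustration, not ARPANET’s actual routing code: a message is split into numbered packets, the packets arrive in an arbitrary order (standing in for independent routes across the network), and the receiver reassembles them by sequence number:

```python
import random

def send_message(message, packet_size=4):
    """Split a message into numbered packets, deliver them out of
    order over different 'routes', then reassemble at the far end."""
    # 1. Break the data into small, independently routable chunks,
    #    each tagged with a sequence number.
    packets = [
        (seq, message[i:i + packet_size])
        for seq, i in enumerate(range(0, len(message), packet_size))
    ]
    # 2. Each packet may take a different path and arrive in any order.
    in_flight = packets[:]
    random.shuffle(in_flight)
    # 3. The receiver reorders by sequence number and reassembles.
    received = sorted(in_flight)
    return "".join(chunk for _, chunk in received)

print(send_message("lo, this is ARPANET calling"))
# → lo, this is ARPANET calling
```

    The crucial point the sketch captures is that no single path matters: because each packet carries its own sequence number, the network can route around failures and the receiver can still reconstruct the original message.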

    Hand-drawn network node diagram close-up illustrating the packet switching concepts behind what was ARPANET

    The First Four Nodes and That Famous Crash

    When ARPANET went live on 29 October 1969, it connected just four nodes. The University of California Los Angeles was the first. Stanford Research Institute in Menlo Park was the second. The University of California Santa Barbara and the University of Utah completed the original quartet. Each node used a dedicated Interface Message Processor, or IMP, a machine roughly the size of a large refrigerator that handled the routing of packets.

    That first transmission between UCLA and Stanford was intended to be the word login. Charley Kline, a student programmer, typed l and o. He phoned Bill Duvall at Stanford to confirm receipt. Two letters had arrived. Then the system at the receiving end crashed. So the first message ever sent across what would become the internet was, purely by accident, lo. There is a certain poetic quality to that. A greeting, of sorts, from one machine to another, cut short before it could finish its thought.

    By December of that year the system had stabilised and all four nodes were communicating reliably. By 1971 there were fifteen nodes. By 1981 there were over two hundred.

    From Military Network to Academic Commons

    ARPANET was never supposed to be a public network. Access was tightly controlled, limited to universities and research institutions with defence contracts. But academic culture has a way of finding applications that funders never imagined. Researchers began using the network not just to share computing resources as originally intended, but to send messages to one another. In 1971, Ray Tomlinson wrote the first email program and chose the @ symbol to separate a user’s name from their host machine. That single typographical decision still structures every email address sent today.

    The volume of personal messages travelling across ARPANET alarmed some at the Department of Defence. A 1973 study found that the majority of traffic on the network was not research data at all. It was electronic mail. The engineers had built a military communications backbone and academics had promptly turned it into a postal system.

    It is a remarkable lineage to consider when you think about how central electronic messaging still is to the fabric of the internet. Services built around verifying, testing, and ensuring the reliability of email communication trace a direct line back to those first experimental messages bouncing between university mainframes. Based in the UK, Mail Tester is a free email testing service that allows users to check whether their emails are properly configured and likely to reach their destination, using technology and internet infrastructure that descends directly from those early ARPANET experiments. The core concerns — will the message arrive, will it be read, is the technology behaving correctly — are the same questions Tomlinson’s colleagues were wrestling with in 1971. You can find the service at https://mail-tester.co.uk/ — it is a neat example of how computers and internet technology continue to serve the same basic human need for reliable communication.

    TCP/IP and the Birth of the Modern Internet

    ARPANET was not the internet. It was a forerunner, a prototype, a proof of concept on a grand scale. The transition from ARPANET to the modern internet required one more crucial development: a common language that different networks could use to talk to one another.

    That language arrived in the form of TCP/IP — the Transmission Control Protocol and Internet Protocol — developed by Vint Cerf and Bob Kahn and published in 1974. TCP/IP was not tied to any specific hardware or network type. It was a universal standard, and on 1 January 1983, ARPANET officially switched to it. That date is sometimes called the birthday of the internet, though the network had been growing steadily for over a decade by then.

    ARPANET was officially decommissioned in 1990. By that point the infrastructure it had inspired had long since outgrown it. Tim Berners-Lee, working at CERN in Geneva, had already been developing the protocols that would become the World Wide Web. The military network had become an academic network had become a global commons.

    What ARPANET Left Behind

    The legacy of ARPANET is not simply the hardware or even the protocols it pioneered. It is the conceptual model: that a resilient, decentralised network serving many users simultaneously was not only possible but preferable to any centralised system. Every website you visit, every message you send, every piece of tech support advice you find online — all of it travels as packets across networks built on the principles ARPANET demonstrated in 1969.

    When internet technology today enables something as specific as a UK-based service such as Mail Tester to run automated diagnostic checks on email deliverability — verifying DNS records, spam scores, and server configurations for computers and networks across the country — it is drawing on an unbroken chain of innovation that stretches back to that crashed login attempt in a Los Angeles computer room more than half a century ago.
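
    To give a flavour of what such a diagnostic involves, here is a minimal, hypothetical sketch of one check a deliverability tester might run: validating the shape of a domain’s SPF record. It is not Mail Tester’s actual code, and it parses a supplied string rather than querying DNS:

```python
def check_spf(txt_record):
    """Run a few basic sanity checks on an SPF TXT record of the
    kind a deliverability tester inspects via DNS."""
    problems = []
    if not txt_record.startswith("v=spf1"):
        problems.append("record must begin with 'v=spf1'")
        return problems
    terms = txt_record.split()[1:]
    # An SPF record should end with an 'all' mechanism stating the
    # policy for senders not matched by an earlier term.
    if not terms or terms[-1].lstrip("+-~?") != "all":
        problems.append("no terminal 'all' mechanism")
    # '+all' authorises the entire internet to send as your domain.
    if terms and terms[-1] == "+all":
        problems.append("'+all' allows anyone to spoof this domain")
    return problems

print(check_spf("v=spf1 include:_spf.example.net ~all"))  # → []
print(check_spf("v=spf1 +all"))  # flags the open policy
```

    A real service layers many such checks (DKIM signatures, DMARC policy, blacklist lookups, spam scoring) on top of live DNS queries, but each one is, at bottom, this same pattern: fetch a record, test it against the rules, report what fails.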

    ARPANET’s architects were solving a specific Cold War problem. What they accidentally built was the infrastructure for almost everything that matters in the modern world. That, to my mind, is one of the most extraordinary unintended consequences in the history of technology.

    Frequently Asked Questions

    What was ARPANET and when was it created?

    ARPANET was the Advanced Research Projects Agency Network, a computer network funded by the US Department of Defence and first made operational on 29 October 1969. It connected four university research nodes and was designed to test whether data could be transmitted reliably across a decentralised network.

    What was the first message ever sent on ARPANET?

    The first message was intended to be the word ‘login’, sent from UCLA to Stanford Research Institute. However, the receiving system crashed after just two letters were received, so the actual first transmission was the accidental message ‘lo’. Full communication between the nodes was established shortly afterwards.

    How did packet switching work on ARPANET?

    Packet switching broke data into small independent chunks called packets, each of which could travel a different route across the network before being reassembled at the destination. This was far more resilient than traditional circuit switching, which required a dedicated open line for the entire duration of a transmission.

    When did ARPANET become the internet?

    ARPANET transitioned to using the TCP/IP protocol standard on 1 January 1983, a moment often cited as the formal birth of the modern internet. ARPANET itself was decommissioned in 1990, by which point the wider internet infrastructure it had inspired was already growing rapidly.

    Did the UK have any role in the development of ARPANET?

    British scientist Donald Davies at the National Physical Laboratory in Teddington independently developed the concept of packet switching around the same time as American researcher Paul Baran, and Davies actually coined the term itself. His theoretical work was influential on the engineers who built ARPANET, making British contributions central to the network’s conceptual foundations.

  • The Rise and Fall of Internet Explorer: Microsoft’s Browser That Defined an Era

    The Rise and Fall of Internet Explorer: Microsoft’s Browser That Defined an Era

    Few pieces of software have shaped the experience of everyday computing quite like Internet Explorer. The history of Internet Explorer is, in many ways, the story of the early web itself: a tale of rapid conquest, corporate ambition, technical stagnation, and an eventual, drawn-out farewell that took far longer than most people expected. To understand it properly, you have to go back to the mid-1990s, when the internet was still something most people encountered for the very first time.

    In 1995, Microsoft made a decision that would reshape the browser landscape entirely. Rather than building a browser from scratch, the company licensed the source code from Spyglass Mosaic and used it as the foundation for Internet Explorer 1.0. It was a modest beginning, bundled quietly with the Windows 95 Plus! pack. But Microsoft moved fast. By 1996, Internet Explorer 3.0 had arrived with support for CSS, JavaScript, and plug-ins, making it a credible rival to Netscape Navigator, which had until then enjoyed an almost uncontested position as the gateway to the web.

    Vintage desktop computer setup evoking the history of Internet Explorer in a dimly lit early 2000s home office

    The Browser Wars: How Internet Explorer Conquered the Web

    The period between 1996 and 2001 became known as the first browser war, and it was fierce. Microsoft had one extraordinary weapon: Windows itself. When Internet Explorer 4.0 launched in 1997, it was bundled directly with Windows 98, meaning that any new computer sold came pre-loaded with Microsoft’s browser. Netscape, which charged for its product, suddenly found itself competing against something that cost nothing and was already sitting on tens of millions of desktops. By 2002, Internet Explorer held roughly 96 per cent of the browser market. That figure is almost impossible to imagine in the fragmented landscape of today.

    The dominance was real, but it came with consequences. With no meaningful competition, Microsoft slowed development dramatically. Internet Explorer 6, released in 2001, became infamous not for what it offered but for how long it outstayed its welcome. It sat largely unchanged for five years. Web developers of that era will still wince at the memory: proprietary rendering quirks, broken box model implementations, and a cavalier relationship with web standards that forced designers to write separate code just to make things look correct in IE. Companies building digital products in the early 2000s, whether creating e-commerce platforms, publishing tools, or emerging optical and display technology services like Droptix, an optical retailer operating in the UK, had to account for IE6’s peculiarities as a core part of their workflow.

    The Slow Decline: Firefox, Chrome, and the Standards Revolution

    The turning point came in 2004 with the release of Mozilla Firefox. Here was a browser built with genuine respect for open standards, offering tabbed browsing, better security, and an extensible architecture that users actually cared about. Firefox didn’t just offer an alternative; it reminded people that browsing the web could be a different kind of experience altogether. Internet Explorer’s market share began to erode, slowly at first, then with increasing speed.

    Close-up of a vintage keyboard and mouse representing the history of Internet Explorer era web browsing

    Then came Google Chrome in 2008, and the erosion became a collapse. Chrome was fast, minimalist, and updated silently in the background, always staying current. Microsoft, meanwhile, continued to iterate on Internet Explorer through versions 7, 8, 9, 10, and 11, each improving on its predecessor but never quite shaking the reputation that had calcified around the brand. By the time IE11 arrived in 2013, many developers had simply stopped designing for it first. The browser had gone from the assumed default to a fallback consideration.

    Microsoft officially retired Internet Explorer 11 in June 2022, ending support on most versions of Windows 10. The browser that had once commanded nearly the entire web had been reduced to a legacy compatibility tool, kept alive mainly because certain enterprise systems, particularly in banking and government, had been built so deeply around IE-specific behaviour that migrating them was genuinely complex and costly.

    What Did Internet Explorer Actually Leave Behind?

    The legacy of Internet Explorer is more complicated than the mockery it attracted in its final years might suggest. Several browser technologies we take for granted today have roots in IE innovations. XMLHttpRequest, the mechanism that underpins AJAX and modern dynamic web applications, was first shipped by Microsoft as an ActiveX object in Internet Explorer 5. The concept of browser-based rich applications, the kind that power everything from collaborative tools to complex product configuration interfaces used by digital-first retailers such as Droptix, can trace part of its lineage back to experiments Microsoft was running in IE during the early 2000s.

    Internet Explorer also forced the web standards movement to become more rigorous. The chaos of the IE6 era prompted organisations like the W3C to push harder for consistent, enforceable standards, and it motivated browser makers to compete not just on features but on standards compliance. In a strange way, IE’s failures helped build the modern web’s strengths.

    Microsoft itself drew the clearest line under the IE era when it launched Microsoft Edge in 2015, initially with its own EdgeHTML rendering engine before rebuilding the browser on Chromium in 2020. Edge was, in part, an act of institutional contrition: an acknowledgement that the old approach had run its course. The history of Internet Explorer ends not with a bang but with a redirect, as users who still tried to open IE were eventually sent automatically to Edge instead.

    Why the History of Internet Explorer Still Matters

    Understanding the history of Internet Explorer matters because it illustrates how quickly technological dominance can evaporate when complacency sets in. A browser that held 96 per cent of the market was reduced to irrelevance within a decade, not because the web stopped growing but because it grew in directions IE refused to follow. For anyone working in technology, digital product design, or even the specialist online retail space where companies like Droptix operate in the UK, the story serves as a vivid reminder that the infrastructure people use to access the web is never as permanent as it seems.

    Internet Explorer was a product of its moment: ambitious, dominant, and ultimately unwilling to adapt until it was far too late. It shaped how an entire generation learned to use the internet, and the scar tissue it left on web development took years to fully heal. That, perhaps more than any market share figure, is its most enduring legacy.

    Frequently Asked Questions

    When was Internet Explorer first released?

    Internet Explorer 1.0 was released in August 1995, initially bundled with the Windows 95 Plus! pack. It was based on licensed code from Spyglass Mosaic and was a modest early effort that Microsoft rapidly iterated on over the following years.

    Why did Internet Explorer become so dominant in the late 1990s?

    Internet Explorer’s dominance came primarily from Microsoft bundling it directly with Windows 98, which meant it was pre-installed on almost every new PC sold. This made it instantly accessible to millions of users at no extra cost, while its main rival Netscape Navigator charged for its product, making competition extremely difficult.

    What caused the decline of Internet Explorer?

    The decline began with the launch of Mozilla Firefox in 2004, which offered better security, tabbed browsing, and genuine respect for web standards. Google Chrome’s arrival in 2008 accelerated the collapse, as its speed and automatic updates set a new benchmark. Internet Explorer’s reputation for poor standards compliance and slow development made it increasingly hard to defend.

    When did Microsoft officially end support for Internet Explorer?

    Microsoft ended support for Internet Explorer 11 on 15 June 2022 for most Windows 10 versions. After this date, users attempting to open Internet Explorer were redirected to Microsoft Edge. Some very specific enterprise and government systems had extended support arrangements, but the browser was effectively retired for general use.

    Did Internet Explorer contribute anything lasting to web technology?

    Yes, significantly. Microsoft introduced XMLHttpRequest in Internet Explorer 5, which became the foundational technology behind AJAX and modern dynamic web applications. IE also inadvertently strengthened the web standards movement; its widespread non-compliance made browser vendors and standards bodies work harder to establish consistent, enforceable rules that still govern the web today.

  • Link Rot and the Lost Web: How to Excavate a Dead Website

    Link Rot and the Lost Web: How to Excavate a Dead Website

    There is a particular kind of grief that comes from clicking a link and finding nothing. A blank page, a parking domain selling cheap insurance, or the stark white text of a 404 error staring back at you. For anyone who remembers the early web, link rot and dead websites are not just technical inconveniences – they are the quiet erasure of digital history, the internet’s equivalent of a library fire happening in slow motion, one broken URL at a time.

    What Is Link Rot and Why Does It Matter?

    Link rot is the process by which hyperlinks gradually stop working as the pages or domains they point to disappear, move, or change. Studies have suggested that a significant proportion of URLs published as recently as five years ago are no longer functional, and for pages from the 1990s or early 2000s the situation is far worse. The web was never designed with permanence in mind. Hosting bills go unpaid, companies fold, hobbyists lose interest, and servers are decommissioned. Each of these mundane events wipes out something that may have been genuinely irreplaceable.

    Think of the small personal homepages hosted on GeoCities – that vast neighbourhood of amateur web publishing that Yahoo shut down in 2009. Millions of pages, built with visible effort and personal pride, covering everything from fan fiction to local history to DIY electronics guides, vanished almost overnight. What remained was fragmentary at best. The loss was not just sentimental; it was cultural. Those pages documented how ordinary people used the early internet, what they cared about, and how they expressed themselves in a medium that was genuinely new.

    404 Pages as Archaeological Sites

    A 404 error is often treated as the end of the road, but for the digital archaeologist, it is actually a starting point. The URL itself is evidence. The domain name, the folder structure, the file name – each element tells a story about when the page was created, what kind of platform hosted it, and how the site was organised. Old URLs from early content management systems, for instance, often contain timestamps or sequential post numbers that reveal the publishing habits of whoever ran the site.
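    As a small illustration of reading a URL as evidence, the sketch below splits a classic GeoCities-style address into its neighbourhood, suburb, and page-number parts. It uses only Python’s standard library; the field names are our own descriptive labels rather than any official scheme, and the example address is hypothetical.

```python
# Sketch: treating a dead URL as archaeological evidence. The path layout
# follows the classic GeoCities "neighbourhood/suburb/number" convention;
# the dictionary keys below are our own labels, not an official scheme.
from urllib.parse import urlparse

def dissect_geocities_url(url: str) -> dict:
    """Split a GeoCities-style URL into its address-like components."""
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]
    return {
        "host": parsed.hostname,
        "neighbourhood": segments[0] if segments else None,
        "suburb": segments[1] if len(segments) > 2 else None,
        "number": segments[-1] if len(segments) > 1 else None,
    }

# Hypothetical example address in the Hollywood neighbourhood:
print(dissect_geocities_url("http://www.geocities.com/Hollywood/Hills/4291"))
```

    Even this trivial dissection turns a dead link into three leads: a host to look up in archive indexes, a neighbourhood that hints at the page’s subject, and a number that places it among its neighbours.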

    Dead domains are similarly rich with clues. When a domain expires, it sometimes gets snapped up by domain squatters, but before that happens there is often a window in which the DNS records still exist, the WHOIS history is readable, and cached versions remain accessible. Even the act of a domain changing hands leaves traces. The Internet Archive’s WHOIS database and historical DNS lookup tools can show you who owned a domain, when registration lapsed, and sometimes even the original registrant’s name or organisation.
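    The WHOIS lookups mentioned above are simple enough to sketch: the protocol (RFC 3912) is one line of text sent over TCP port 43, answered with a free-form text record. The function below is a minimal, unauthoritative example; the default server is IANA’s public WHOIS service, and the sample record is illustrative, since real registries vary their field names considerably.

```python
# Minimal WHOIS client sketch (RFC 3912): one query line over TCP port 43,
# answered with free-form text. Field names vary between registries, so
# the parsing helper below is a heuristic, not a standard.
import socket

def whois(domain: str, server: str = "whois.iana.org", timeout: float = 10.0) -> str:
    """Send a raw WHOIS query and return the text response."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def registration_lines(record: str) -> list[str]:
    """Pull out the date and ownership lines an excavator cares about."""
    wanted = ("created:", "changed:", "organisation:", "registrar:")
    return [line.strip() for line in record.splitlines()
            if line.strip().lower().startswith(wanted)]

# Illustrative record only - real WHOIS output differs by registry:
sample = "organisation: Example Hosting Ltd\ncreated: 1997-03-02\nchanged: 2009-10-26"
print(registration_lines(sample))
```

    A creation date from the mid-1990s and a final change around 2009 would, for instance, be consistent with a site swept away in the GeoCities closure.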

    How the Wayback Machine Tries to Save Everything

    The most important tool in digital preservation is the Wayback Machine, operated by the Internet Archive, a non-profit organisation based in San Francisco that has been crawling and archiving web pages since 1996. By entering a URL into the Wayback Machine, you can see a calendar of snapshots taken over the years, sometimes going back decades. For many lost sites, these snapshots are the only surviving record.
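    For anyone who wants to script this rather than click through the calendar, the Internet Archive exposes the same snapshot index through its CDX API. The sketch below builds a query URL and parses the JSON the endpoint returns; the endpoint and field names are the documented ones, but the sample response here is illustrative rather than live data.

```python
# Sketch: querying the Internet Archive's CDX snapshot index. The endpoint
# and the "fl" field names are documented by the Archive; the sample
# response below is illustrative, not a live capture.
import json
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(target: str, limit: int = 50) -> str:
    """Build a CDX API URL that returns JSON rows of snapshots."""
    params = {"url": target, "output": "json", "limit": limit,
              "fl": "timestamp,original,statuscode"}
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

def parse_snapshots(raw: str) -> list[dict]:
    """Turn the CDX JSON (header row, then data rows) into dicts."""
    rows = json.loads(raw)
    if not rows:
        return []
    header, *data = rows
    return [dict(zip(header, row)) for row in data]

# Illustrative response shape (timestamps are YYYYMMDDhhmmss):
sample = ('[["timestamp","original","statuscode"],'
          '["19961222154602","http://www.geocities.com/","200"]]')
snaps = parse_snapshots(sample)
print(cdx_query_url("geocities.com"))
print(snaps[0]["timestamp"][:4])  # year of the first snapshot in the sample
```

    Fetching the generated URL with any HTTP client yields the real snapshot list, which can then be walked chronologically to reconstruct a site’s history.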

    But the Wayback Machine has limitations that matter enormously when you are trying to reconstruct a dead website. Crawlers do not capture everything – dynamic content, password-protected pages, Flash animations, and embedded media often survive only partially or not at all. The archive also relies on permission systems; some website owners explicitly opted out using robots.txt files, which means their content was never captured. For the digital historian, this creates gaps that can be frustrating precisely because the absence itself is invisible. You do not always know what you are missing.
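    The robots.txt opt-out described above can be demonstrated with Python’s standard-library parser. The two-line rules file below is a made-up example, but `ia_archiver` really was the crawler name the Internet Archive used, so a site carrying these lines would have excluded itself from the record.

```python
# Sketch: how a two-line robots.txt file kept a site out of the archive.
# "ia_archiver" was the Internet Archive's crawler user-agent; the rules
# here are a hypothetical example of the blanket opt-out described above.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The archive's crawler is refused everything...
print(parser.can_fetch("ia_archiver", "http://example.com/index.html"))  # False
# ...while an ordinary browser user-agent is still allowed in.
print(parser.can_fetch("Mozilla/5.0", "http://example.com/index.html"))  # True
```

    The asymmetry is the whole problem: human visitors saw the site perfectly well while it was alive, yet nothing of it was captured, and the gap only becomes visible once the site is gone.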

    Other Tools for Excavating Vanished Pages

    Beyond the Wayback Machine, a small ecosystem of tools and communities works to preserve and recover lost web content. Google’s cache, though increasingly reduced in scope, occasionally surfaces recent versions of pages that have since disappeared. Academic institutions and national libraries run their own web archives, with the British Library’s UK Web Archive being particularly valuable for British sites – it has been archiving selected UK websites since 2004, and crawling the .uk domain comprehensively since legal deposit regulations were extended to the web in 2013.

    Community-led efforts have also played a vital role. The Archive Team, a volunteer group dedicated to rescuing web content before it disappears, has carried out mass archiving efforts ahead of major platform shutdowns, including the GeoCities closure. Their work, alongside projects like the TEXTFILES.COM archive maintained by Jason Scott, has saved enormous quantities of early internet culture that would otherwise be entirely gone.

    For individual excavation projects, the approach tends to be methodical. Start with the Wayback Machine and note every snapshot date. Cross-reference with Google cache and Bing’s cached pages. Check if the domain ever hosted other sites before or after the one you are researching. Search for quoted text from pages you remember in case other sites quoted or copied that content. Look for mirror sites – in the early web, it was common practice to host mirrors of popular resources across multiple servers, and those mirrors sometimes survived the original.

    Why So Much of the Early Web Is Simply Gone

    The uncomfortable truth about link rot and dead websites is that the early web was built as if it would always exist, by people who had no real framework for understanding digital impermanence. There was no tradition of archiving equivalent to the one that existed for print. Hosting was cheap and informal. Domain registration was a novelty. Nobody thought seriously about what would happen when the money ran out or the enthusiasm faded.

    This makes the surviving fragments all the more precious. A cached GeoCities page, a Wayback Machine snapshot of a now-defunct forum, an old Usenet thread preserved in Google Groups – these are primary sources in the truest sense. They are the unedited, unmediated voices of people who were present at the creation of something genuinely new. Treating them with the same seriousness that a historian would bring to a manuscript or a parish record is not overclaiming their importance. It is simply accurate.

    The archaeology of the dead web rewards patience and curiosity in equal measure. Every broken link is a question worth asking.

    Handwritten notes of old URLs representing the research process of excavating link rot and dead websites
    Digital archaeologist researching link rot and dead websites using archived web records late at night

    Link rot and dead websites FAQs

    What causes link rot and why do websites disappear?

    Link rot happens when websites or individual pages are removed, moved to a different URL, or when their domain registration lapses and is not renewed. The most common causes include hosting costs becoming too high, companies shutting down, platform closures like the GeoCities shutdown, and individual site owners simply losing interest or passing away. Unlike physical documents, digital content has no automatic preservation mechanism, so once it is gone it is often gone permanently unless it was archived.

    How do I use the Wayback Machine to find a deleted website?

    Go to web.archive.org and type the full URL of the website you are looking for into the search bar. The Wayback Machine will show you a calendar view of every date on which a snapshot of that page was captured. Click on any highlighted date to view the archived version of the site as it appeared at that time. Be aware that some elements like images, embedded video, or dynamic content may not have been captured correctly, so older snapshots can sometimes appear broken or incomplete.

    Is there any way to recover a website that has completely disappeared?

    Full recovery is rarely possible, but partial reconstruction often is. The Wayback Machine is the best starting point, but you should also check the British Library’s UK Web Archive for British sites, search for quoted text in other pages that may have referenced the lost content, and look for mirror sites that may have copied the original. If you are trying to recover a domain’s history, WHOIS lookup tools and historical DNS records can reveal previous owners and registration dates, which sometimes leads to other archive sources.

    Why didn’t the Wayback Machine capture a website I’m looking for?

    Several factors can prevent the Wayback Machine from capturing a site. If the website’s robots.txt file contained instructions blocking crawlers, the Internet Archive would have respected that and not archived the content. Sites behind login walls, paywalls, or heavy dynamic scripting were also difficult to crawl accurately. Some sites were simply not popular or linked-to enough to attract the Archive’s crawler during the window when they were live. Community archiving projects like the Archive Team sometimes filled these gaps, but coverage is never complete.

    What is the Archive Team and how does it help preserve the old web?

    The Archive Team is a volunteer collective dedicated to rescuing digital content before major platforms shut down or delete their data. They have carried out large-scale archiving projects ahead of closures including GeoCities, similar hosting services, and numerous social platforms. Their archived collections are often donated to the Internet Archive and made publicly accessible. Unlike automated crawlers, Archive Team volunteers can sometimes capture content that requires human navigation or login credentials, making their work particularly valuable for preserving community-built spaces on the early web.

  • The Golden Age of Instant Messaging: How ICQ, MSN and AIM Shaped a Generation Online

    The Golden Age of Instant Messaging: How ICQ, MSN and AIM Shaped a Generation Online

    The history of instant messaging is not simply a story about technology. It is a story about identity, belonging, and the very human need to be seen – refracted through a dial-up connection and a blinking cursor. Before social media feeds and smartphone notifications, there were four programs that dominated the digital lives of young people: ICQ, MSN Messenger, AIM, and Yahoo! Messenger. Each left its mark like a fingerprint on the early internet.

    Logging On Was a Performance

    In 1996, ICQ arrived with a sound that still triggers nostalgia in anyone who heard it – the hollow, almost cartoonish “uh-oh” that announced a new message. ICQ, whose name was a phonetic play on “I seek you”, was among the first to give ordinary people a persistent online identity through a unique number. Your ICQ number was yours, like a digital passport. People memorised them. Lower numbers implied seniority, a kind of unspoken social currency.

    Then came AIM – AOL Instant Messenger – which dominated North American households through the early 2000s. Across the Atlantic, MSN Messenger became the platform of choice for British teenagers. Both shared something important: the away message. Those short, often cryptic strings of text – a song lyric, a vague emotional declaration, a quote clearly aimed at one specific person – functioned as early status updates. They were performative in a way that felt entirely authentic at the time.

    The Unwritten Rules of the Digital Doorstep

    The history of instant messaging cannot be told without acknowledging the elaborate social etiquette that grew around it. Logging off without warning was considered rude. Being listed in someone’s “favourites” on MSN Messenger meant something. Blocking a person was a declaration of war. Appearing “online” when you did not want to talk required switching to “busy” or the more passive-aggressive “away”, hoping nobody would notice you were still lurking.

    Yahoo! Messenger brought its own flavour to the mix, with customisable avatars and a slightly older, more eclectic user base. Its emoticons were louder and more animated than its rivals, and its chat rooms offered a wilder, less curated social experience. Each platform had its own personality, and users often ran two or three simultaneously, toggling between windows like digital social butterflies.

    Sounds as Cultural Memory

    What makes these platforms remarkable as historical artefacts is how deeply their sounds became embedded in memory. The MSN nudge. The AIM door-opening sound when a contact came online. The ICQ “uh-oh”. These were not merely notifications – they were Pavlovian triggers tied to anticipation, excitement, and the particular giddiness of early teenage connection. No algorithm curated these interactions. You simply waited, and then someone appeared.

    Identity Before the Profile Picture

    Long before profile photographs became the dominant mode of online self-presentation, screen names carried the weight of identity. Choosing your AIM handle or your MSN display name was a considered act. Teenagers cycled through names that signalled their music taste, their mood, their aspirations. Your username was the earliest form of personal branding most young people had ever encountered.

    The history of instant messaging is, in many ways, the prehistory of everything that came after – the status update, the story, the vibe check. These platforms taught a generation how to perform the self in digital space, how to signal emotion through punctuation, and how to maintain friendships across distances that would once have meant silence.

    Why These Platforms Still Matter

    Most of these services no longer exist in their original form. MSN Messenger was retired in 2013. AIM followed in 2017. ICQ has dwindled to near-obscurity. Yet their influence on how we communicate online is immeasurable. Understanding the history of instant messaging helps us understand the shape of modern digital culture – because so much of what we take for granted today was first practised, awkwardly and beautifully, in those blinking chat windows.

    Teenager at a vintage desktop computer capturing the history of instant messaging in the early 2000s
    Vintage digital media and CD-ROMs representing artefacts from the history of instant messaging

    History of instant messaging FAQs

    What was the first widely used instant messaging service?

    ICQ, launched in 1996 by an Israeli company called Mirabilis, is widely considered the first instant messaging service to gain mainstream popularity. It introduced the concept of a persistent online identity through unique user numbers and was later acquired by AOL in 1998.

    Why did MSN Messenger become so popular in the UK?

    MSN Messenger benefited enormously from being bundled with Windows and tied to Hotmail, which was already one of the most popular email services in the UK. Its simplicity, familiar contact lists, and features like display pictures and personal messages made it the go-to platform for British teenagers throughout the early 2000s.

    When did the major instant messaging platforms shut down?

    MSN Messenger was officially discontinued in 2013, having been replaced by Skype within Microsoft’s ecosystem. AOL Instant Messenger (AIM) was shut down in December 2017. Yahoo! Messenger was retired in 2018. ICQ continues to exist in a limited form but is a shadow of its former self.

  • The First Online Shopping Experiences: What It Was Really Like to Buy Things on the Early Internet

    The First Online Shopping Experiences: What It Was Really Like to Buy Things on the Early Internet

    Early internet shopping was not the slick, one-click experience we know today. It was slow, strange, and required a leap of faith that most people simply were not willing to make. And yet, from these clunky, uncertain beginnings, an entire commercial world was born.

    Before the Basket: The Internet as a Catalogue

    In the early 1990s, the web was barely functional as a shopping destination. Most people were still dialling in on 14.4k or 28.8k modems, waiting minutes for a single image to load. The idea of typing your bank card number into a computer felt, to most, like handing your wallet to a stranger in a dark alley. Retailers who did attempt to sell online had websites that looked closer to a printed leaflet than anything resembling a shop. Navigation was guesswork, product descriptions were sparse, and photographs – if they existed at all – were tiny, blurry squares.

    Yet the curiosity was there. Catalogues had been selling by post for decades, and the internet felt like a natural extension of that idea – only faster. The question was whether anyone could make it trustworthy enough to actually hand over money.

    The Pioneers Who Made Early Internet Shopping Possible

    A handful of companies took the risk in the mid-1990s. Amazon began as an online bookshop in 1995, a deliberately safe product to test the waters – books were uniform, easy to describe, and cheap enough that a bad purchase would not ruin anyone. Around the same time, eBay launched as a peer-to-peer auction site. Both ventures succeeded partly because they started small and built trust gradually.

    In the UK, the story was slightly different. British consumers were cautious by nature, and broadband was years away from being widespread. Early internet shopping here often meant nursing a dial-up connection through a painfully slow transaction, only to receive a confirmation letter in the post days later rather than an email. The infrastructure simply was not ready for the ambition.

    What Shopping Actually Felt Like in 1999

    By the late 1990s, things had improved marginally. Secure payment gateways had been introduced, and the padlock icon in your browser offered some reassurance. Still, early internet shopping involved a peculiar ritual: carefully reading every page of a website’s security policy, printing out your order confirmation as proof it had actually happened, and then waiting anxiously to see whether anything arrived.

    Customer service was handled by email with response times measured in days. Returns were a complicated affair involving printed forms and trips to the post office. There was no live chat, no tracking link, and no guarantee that anyone was monitoring the inbox at all. Shopping this way demanded patience that modern consumers would find almost unimaginable.

    How Trust Was Eventually Built

    What changed everything was not technology alone – it was reputation. User reviews, which Amazon pioneered in the mid-1990s, gave shoppers something to hold onto. If a hundred other people had bought a product and found it acceptable, perhaps it was safe to try. This social proof became the foundation on which the entire industry was rebuilt.

    Today, we carry that entire history in our pockets. Modern tools have compressed decades of development into apps and instant checkouts. If you want to explore what shopping looks like now for local communities, a free UK shopping app shows just how far things have come from those nervous early days of typing card numbers into a 640-pixel-wide browser window.

    A Legacy Worth Remembering

    The story of early internet shopping is really a story about human trust – how it is built slowly, broken easily, and once established, becomes the invisible foundation of everything. The awkward, stuttering beginnings of online retail shaped every expectation we now take for granted. Every smooth checkout, every next-day delivery, every saved basket owes something to those uncertain pioneers who clicked “buy” before they truly believed it would work.

    Person typing carefully on an old keyboard during the early internet shopping era in a 1990s home office
    Stacked vintage cardboard parcels by a doorway representing early internet shopping deliveries from the 1990s

    Early internet shopping FAQs

    What was the very first thing ever sold online?

    The claim most often repeated is that a Sting CD was sold via NetMarket in the United States in 1994, making it one of the earliest recorded secure online transactions. However, informal trades and sales had taken place over early networks before that, so pinpointing a true ‘first’ is difficult.

    Why were people so reluctant to try early internet shopping?

    The main concern was security. Entering payment details into a website felt deeply unfamiliar and risky at a time when most people had no understanding of encryption. Slow internet speeds, poorly designed websites, and a lack of any trusted reviews or guarantees also made the experience feel unreliable compared to walking into a shop.

    How did online shopping change the British high street?

    The shift was gradual rather than sudden. Throughout the early 2000s, more consumers grew comfortable with buying online, which began drawing footfall away from physical shops. By the 2010s the effect was significant, with many established retailers closing stores or restructuring entirely to compete with online-only rivals.

  • When Forums Felt Like Small Towns: A History of Classic Message Boards

    When Forums Felt Like Small Towns: A History of Classic Message Boards

    If you want to understand early online community life, you have to walk through the history of classic message boards. Before timelines and algorithms, there were flat lists of threads, avatars the size of postage stamps, and moderators who felt more like village elders than platform staff.

    The history of classic message boards begins with dial-up echoes

    The story really starts with bulletin board systems, or BBSes. In the 1980s and early 1990s, these were often a single computer in someone’s spare room, connected to a phone line. You dialled in, one person at a time, and left messages in text-only forums. Every BBS had its own flavour: some were devoted to local clubs, others to roleplaying games or underground music. The etiquette was shaped by scarcity – phone lines and hard drives were limited – so users learned to be concise, respectful, and to clean up after themselves.

    As dial-up became more common and the web arrived, the BBS spirit moved into the browser. Early web forums looked plain, but they carried over that sense of a shared, finite space where everyone could see everyone else’s words. You could almost hear the modem squeal as new posts appeared.

    phpBB, vBulletin and the rise of the forum engine

    The late 1990s and early 2000s saw the tools of community building standardise. This is where the history of classic message boards becomes recognisable. Software like phpBB, vBulletin, Invision Power Board and SMF turned forums into modular, customisable towns. An admin could rent a bit of web hosting, upload some files, and suddenly they had a bustling square for fans of a band, a game, or an obscure hobby.

    These engines shared familiar landmarks: index pages listing categories, threads sorted by latest reply, user profiles with join dates and post counts, and private messages that felt like passing notes behind the scenes. Skins and themes gave each forum its own architectural style. Some were dark and moody, others pastel and friendly, but the floorplan was always similar enough that a seasoned forum-goer could navigate by instinct.

    Moderation in the age of village elders

    Moderation on these boards felt personal. At the top sat an administrator, often the founder, who paid the bills and set the rules. Below them, moderators patrolled individual sections. Their names glowed in different colours, and their tools were simple but powerful: move, merge, lock, delete, warn, ban.

    Unlike modern platforms, there was rarely a distant, faceless policy team. Rules were written in sticky threads, debated openly, and amended as the community grew. A moderator might step into a heated thread like a local constable, remind everyone to “attack ideas, not people”, and split arguments into a separate topic. Repeat troublemakers were not just usernames to be removed, but regulars whose absence would be noticed and discussed.

    Because these places felt small, reputation mattered. Users learned to quote properly, avoid derailing topics, and respect the “no politics” or “no spoilers” lines chalked on the virtual pavement. Infractions were often met with public explanations, which quietly taught newcomers how to behave.

    How message boards archived knowledge by accident

    One of the most remarkable parts of the history of classic message boards is how they became accidental libraries. Forums were built for conversation, not preservation, yet they ended up storing vast amounts of practical and cultural knowledge.

    Sticky threads acted like noticeboards: FAQs, guides, and “read this before posting” collections. Long-running “megathreads” documented years of troubleshooting, fan theories, and personal stories. Search functions were clunky, but dedicated users learned advanced tricks, using titles, prefixes and tags to make future retrieval easier.

    Over time, these message boards formed layered archives. Old posts were rarely deleted, only pushed further back in the pagination. Newcomers would arrive via a search engine, land in a ten-year-old thread, and tentatively reply, resurrecting it from the depths. Veterans would smile at the “thread necromancy”, then patiently answer again, often linking to the original guides they had written.

    People in an early internet cafe participating in the history of classic message boards
    Archival computer corner symbolising the preserved history of classic message boards

    History of classic message boards FAQs

    What were classic message boards used for?

    Classic message boards were used to create focused communities around shared interests, from games and music to programming and local clubs. People asked questions, shared guides, debated ideas and built long running friendships in public threads that anyone in the community could read and join.

    How did moderation work on early forums?

    Moderation on early forums was handled by administrators and volunteer moderators drawn from the community. They enforced written rules, moved or locked threads, issued warnings and bans, and often explained their decisions in public, which helped shape a shared sense of etiquette and acceptable behaviour.

    Why did many classic forums disappear?

    Many classic forums disappeared as social media and chat platforms drew activity away, leaving message boards quieter and harder to justify hosting. Some were shut down when their owners could no longer maintain them, while others simply faded, remaining online as quiet archives rather than active communities.