Author: Ethan Miller

  • The History of Social Media: From Six Degrees to the Algorithm-Driven Platforms of Today

    The history of social media is, at its core, the story of human beings trying to find each other. Long before Facebook absorbed half the planet’s waking hours, and years before Twitter compressed public discourse into something resembling a shout across a crowded room, a relatively modest website launched in 1997 with an idea so obvious it seems almost quaint now: what if you could list your friends online? That site was Six Degrees, and it started something that would fundamentally reshape civilisation.

    Vintage 1990s computer displaying an early website, representing the history of social media beginnings

    Six Degrees and the First Social Networks (1997-2003)

    Six Degrees took its name from the “six degrees of separation” theory, the notion that any two people on earth are connected through a chain of no more than six acquaintances. Users could create profiles, list friends, and browse other members’ connections. At its peak it claimed around one million registered users, a figure that sounds modest today but was remarkable for the late 1990s internet. The site closed in 2001. Its founder, Andrew Weinreich, later said the world simply wasn’t ready: broadband penetration was low, digital cameras were rare, and most people still thought of the internet as somewhere you went to look things up rather than somewhere you lived.

    What followed was a period of quiet experimentation. Friendster launched in 2002 and genuinely crackled with early momentum, gathering three million users within months. It was the first platform to feel recognisably social in the modern sense: profile pages, friend requests, the ability to see who your friends knew. But Friendster was undone by its own success. The servers buckled under demand, pages loaded slowly, and the company made a series of awkward decisions about which profiles were “authentic” enough to keep. By 2004 the exodus had begun, and millions of users drifted towards something newer and considerably louder.

    The MySpace Era: Customisation, Chaos, and Culture

    MySpace arrived in 2003 and, for a few extraordinary years, it was the internet’s town square. What made it different was mess. Users could edit their profile pages with raw HTML and CSS, meaning every page looked completely unlike every other. Backgrounds clashed, embedded music players auto-played the moment a page loaded, animated GIFs flickered in every corner. It was chaotic and it was brilliant. Bands discovered they could connect directly with fans without a record label as intermediary. Arctic Monkeys, who became one of Britain’s biggest acts of the mid-2000s, famously distributed early recordings via MySpace before signing to a major label. The platform democratised music promotion in ways the industry is still processing.

    At its peak in 2008, MySpace had roughly 100 million active users and was, briefly, the most visited website in the United States. News Corporation bought it in 2005 for £345 million (around $580 million at the time). Then Facebook arrived properly, and everything changed.

    Facebook and the Professionalisation of Social Networking

    Mark Zuckerberg launched Facebook from a university dormitory in 2004, initially restricting access to Harvard students before expanding to other universities and eventually the general public in 2006. Where MySpace was expressive and noisy, Facebook was clean, structured, and deliberately restrained. You couldn’t break the layout. Every profile looked the same. That uniformity turned out to be a feature rather than a limitation: it felt trustworthy, legible, safe.

    Evolution of mobile phones laid out chronologically, illustrating the hardware timeline of the history of social media

    By 2012, Facebook had one billion active users. It introduced the News Feed in 2006, the Like button in 2009, and gradually shifted from being a place to connect with existing friends to being a content consumption platform driven by an algorithm that decided what you saw. That shift mattered enormously. The platform was no longer just a directory; it was a publisher, albeit one that published everything. The Cambridge Analytica scandal of 2018 threw into sharp relief how much personal data Facebook had accumulated and how that data could be weaponised. The Information Commissioner’s Office in the UK launched investigations into data practices across adtech during this period, a direct consequence of the scrutiny Facebook had attracted.

    Twitter, LinkedIn, and the Age of Niches

    Twitter launched in 2006 with a 140-character limit that felt absurd at first and revelatory shortly after. It wasn’t a place for long-form anything. It was a wire service, a running commentary, a place where journalists, politicians, and anyone with an opinion could broadcast in real time. The 2009 Hudson River plane landing in New York was reported on Twitter before any news outlet. The Arab Spring of 2010-2011 showed how the platform could carry political information across borders that traditional media couldn’t easily cross. In the UK, general elections from 2010 onwards saw Twitter function as a parallel commentary track, frequently shaping newspaper coverage the following morning.

    LinkedIn, which launched in 2003 but grew steadily rather than explosively, carved out a separate niche entirely: professional networking stripped of social informality. It became the place where CVs went to become living documents, where recruiters hunted, where industry debates happened in somewhat more measured tones. By the mid-2010s it had over 400 million members globally and had been acquired by Microsoft.

    Instagram, Snapchat, and the Visual Turn

    Instagram launched in October 2010 and reached one million users in two months. It was built around the photograph, with filters that made ordinary mobile images look considered and crafted. Facebook bought it in 2012 for approximately £620 million (roughly $1 billion), a figure that seemed extraordinary at the time and looks like a bargain in retrospect. Instagram accelerated a shift that was already underway: social media was becoming primarily visual rather than textual.

    Snapchat, arriving in 2011 with its disappearing messages, introduced a new logic entirely. Ephemerality as a feature. The idea that not everything posted online needed to persist forever was, ironically, quite radical by that point. Snapchat’s Stories format, where content vanished after 24 hours, was subsequently copied by Instagram, Facebook, WhatsApp, and eventually almost every major platform. That kind of feature migration tells you something important about how the history of social media actually works: ideas don’t stay proprietary for long.

    The Entrepreneur Internet: Building Your Own Corner of the Web

    Running parallel to all of this platform history was a quieter story about individuals trying to establish their own presence online rather than simply renting space on someone else’s. Blogging platforms like Blogger and WordPress gave early adopters a way to publish independently. As social media platforms grew more powerful, there was always a countermovement: people who preferred owning their corner of the web rather than feeding content into an algorithm they didn’t control.

    That instinct remains alive today. Anyone starting a business or building a personal brand quickly learns the difference between a social media presence (rented, precarious, subject to platform rule changes) and an actual website (owned, stable, credible). Nottingham-based Inuvate has responded to exactly this gap, offering a free website service where entrepreneurs and small businesses pay only for hosting, making owning your own website accessible to people who assumed it required technical expertise or significant capital investment. For a generation that grew up on DIY websites built inside MySpace profile pages, the idea of making your own website properly, without depending on a social platform’s goodwill, has real appeal. Inuvate (inuvate.co.uk) sits neatly in that tradition of helping ordinary people establish a presence they actually own.

    TikTok and the Algorithm as Editor-in-Chief

    TikTok’s rise is the most dramatic chapter in recent social media history. Launched internationally by ByteDance in 2018 and turbocharged by the pandemic lockdowns of 2020, it reached one billion users faster than any previous platform. Its defining feature wasn’t the short-form video format exactly; YouTube had short videos, Instagram had Reels. What distinguished TikTok was its For You Page: a recommendation algorithm so refined it could hook a new user within minutes by inferring their interests from tiny behavioural signals. You didn’t need friends on TikTok. You didn’t need to follow anyone. The algorithm simply found you content you’d watch.
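
    To see how stark that break is, consider a deliberately toy sketch of discovery-feed ranking in TypeScript. Everything in it (the topic tags, the watch-time signal, the ranking rule) is a hypothetical simplification for illustration, not TikTok’s actual system, but it captures the structural point: the feed needs no friend graph at all.

    ```ts
    // Toy discovery feed: infer a viewer's interests purely from
    // behavioural signals, with no social graph involved.
    type Topic = string;

    interface Clip {
      id: string;
      topic: Topic; // hypothetical content tag
    }

    // Per-viewer affinity scores, learned only from watch behaviour.
    const affinity = new Map<Topic, number>();

    // Record one behavioural signal: how much of the clip was watched.
    function recordWatch(clip: Clip, watchedFraction: number): void {
      affinity.set(clip.topic, (affinity.get(clip.topic) ?? 0) + watchedFraction);
    }

    // Serve the candidate whose topic the viewer has engaged with most.
    // Assumes a non-empty candidate pool.
    function nextClip(candidates: Clip[]): Clip {
      return [...candidates].sort(
        (a, b) => (affinity.get(b.topic) ?? 0) - (affinity.get(a.topic) ?? 0)
      )[0];
    }

    // A new viewer lingers on two cooking clips; cooking content follows.
    recordWatch({ id: "c1", topic: "cooking" }, 0.9);
    recordWatch({ id: "c2", topic: "cooking" }, 0.8);
    console.log(nextClip([
      { id: "f1", topic: "football" },
      { id: "c3", topic: "cooking" },
    ])); // { id: "c3", topic: "cooking" }
    ```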

    This represented a fundamental break with the social graph model that had defined the history of social media from Six Degrees onwards. Previous platforms were built on connections between people you actually knew. TikTok’s primary relationship was between you and the machine. The social element was secondary. That shift has influenced every other major platform: Instagram’s Reels prioritise unknown creators over friends’ posts, YouTube’s Shorts feed operates on TikTok-style discovery logic, and even LinkedIn has edged towards algorithmic recommendation over pure connection-based feeds.

    What the History of Social Media Actually Tells Us

    Looking back across three decades, a few patterns emerge clearly. Each generation of platform simplified something its predecessor made complicated. Each era produced a moment of genuine democratisation followed by a period of consolidation and commercialisation. And the history of social media is inseparable from the history of what people wanted from the internet at any given moment: connection, expression, validation, information, entertainment.

    The instinct that drives entrepreneurs today to think about starting a business online, or building DIY websites that serve a niche community, is the same instinct that made Six Degrees possible in 1997. The tools are incomparably better. The audiences are vastly larger. But the underlying human impulse, to find your people and speak to them directly, hasn’t changed at all. Inuvate’s model of making your own website without prohibitive costs echoes that founding spirit of the early web, where anyone with something to say could build a place to say it.

    The platforms will keep changing. New ones will emerge, old ones will calcify or collapse. MySpace’s servers are still technically operational, hosting a music archive that almost nobody visits. Six Degrees is long gone. But the history of social media is not really a history of platforms. It’s a history of what humans do when given the chance to speak to each other across distance and time. That part isn’t going anywhere.

    Frequently Asked Questions

    What was the first social media platform ever created?

    Six Degrees, launched in 1997, is widely considered the first recognisable social media platform. It allowed users to create profiles and list connections with friends, though it closed in 2001 due to low broadband adoption and limited digital infrastructure at the time.

    Why did MySpace fail despite being so popular?

    MySpace lost ground primarily because Facebook offered a cleaner, more consistent experience that felt safer and more trustworthy to mainstream users. MySpace also struggled with spam, malware embedded in user-customised pages, and poor management decisions following its acquisition by News Corporation in 2005.

    How did TikTok change social media compared to Facebook and Twitter?

    TikTok replaced the traditional social graph model, where content came from people you knew, with a pure algorithmic discovery model. Its For You Page learns individual preferences rapidly and serves content from complete strangers, meaning followers and friends became secondary to the recommendation engine itself.

    When did social media become mainstream in the UK?

    Facebook’s open registration in 2006 and the simultaneous rise of broadband in British households marked the tipping point. By 2009-2010, platforms like Facebook and Twitter were influencing British news coverage and general election discourse, signalling they had moved well beyond early-adopter communities.

    Is social media still growing or has it reached its peak?

    Global user numbers continue to grow, particularly in emerging markets, though growth in Western countries including the UK has slowed considerably as penetration approaches saturation. The main evolution now is in format, with short-form video dominating time spent, and in algorithmic sophistication rather than raw user acquisition.

  • What Was ARPANET? The Cold War Project That Became the Internet

    Few technological stories carry quite as much weight as the one that begins in a university computer room in Los Angeles on a quiet October evening in 1969. A researcher sat at a terminal and typed two letters. The system crashed. Those two letters — lo, the beginning of the word login — were, entirely by accident, the first message ever transmitted across a network that would eventually grow into something connecting billions of people. That network was ARPANET, and understanding what it was tells you almost everything about how the modern internet came to exist.

    1960s university computer room representing what was ARPANET and its early hardware

    What Was ARPANET and Why Was It Built?

    ARPANET stands for Advanced Research Projects Agency Network. It was commissioned by the United States Department of Defense through its Advanced Research Projects Agency, known as ARPA, in the late 1960s. The Cold War context is impossible to ignore. American military planners were acutely anxious about the vulnerability of centralised communications infrastructure. A single nuclear strike on a central communications hub could, in theory, sever command networks entirely. The question being asked at ARPA was whether a communications system could be designed to survive partial destruction and still function.

    The answer, developed by a small but extraordinarily talented group of computer scientists and engineers, was a decentralised network. No single node would be essential. If one connection failed, data would simply find another route. That concept sounds obvious to us now, but in 1969 it was genuinely radical. Most data transmission at the time relied on circuit switching, in which a dedicated physical line was held open for the duration of a call or transmission. ARPANET was built on something entirely different.

    The Idea That Changed Everything: Packet Switching

    Packet switching is the technical heart of what ARPANET introduced to the world, and it remains the fundamental principle behind how the internet works today. Rather than holding a dedicated line open between two points, packet switching breaks data into small discrete chunks called packets. Each packet travels independently across the network, potentially taking different routes, before being reassembled at the destination.
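
    The mechanism is simple enough to demonstrate in a few lines. The TypeScript sketch below is purely illustrative: a message is split into sequence-numbered packets, the packets arrive in an arbitrary order (standing in for the different routes they might take), and the original message is reassembled at the destination.

    ```ts
    // Illustrative packet switching: split a message into numbered packets,
    // let them arrive out of order, then reassemble at the destination.
    interface Packet {
      seq: number;     // position in the original message
      total: number;   // how many packets to expect
      payload: string; // this packet's chunk of the data
    }

    function toPackets(message: string, size: number): Packet[] {
      const chunks: string[] = [];
      for (let i = 0; i < message.length; i += size) {
        chunks.push(message.slice(i, i + size));
      }
      return chunks.map((payload, seq) => ({ seq, total: chunks.length, payload }));
    }

    function reassemble(packets: Packet[]): string {
      return [...packets]
        .sort((a, b) => a.seq - b.seq) // restore original order
        .map(p => p.payload)
        .join("");
    }

    // Simulate packets taking different routes by shuffling arrival order.
    const packets = toPackets("lo and behold, the network still works", 8);
    const arrived = [...packets].sort(() => Math.random() - 0.5); // crude shuffle
    console.log(reassemble(arrived)); // the full message, intact
    ```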

    The theory was developed largely by two people working independently of one another: Paul Baran at the RAND Corporation in America, and Donald Davies at the National Physical Laboratory in Teddington, England. Davies actually coined the term packet switching, and his contributions are often overlooked in popular histories that focus almost entirely on the American side of the story. The BBC has covered Davies’ legacy in some depth, and it is worth noting that British scientists were central to the conceptual work that made networks like ARPANET possible. You can read more about the history of the internet on the BBC.

    Hand-drawn network node diagram close-up illustrating the packet switching concepts behind what was ARPANET

    The First Four Nodes and That Famous Crash

    ARPANET’s first link went live on 29 October 1969, connecting the first two of four planned nodes. The University of California, Los Angeles was the first. Stanford Research Institute (SRI) in Menlo Park was the second. The University of California, Santa Barbara and the University of Utah completed the original quartet in the weeks that followed. Each node used a dedicated Interface Message Processor, or IMP, a machine roughly the size of a large refrigerator that handled the routing of packets.

    That first transmission between UCLA and SRI was intended to be the word login. Charley Kline, a student programmer at UCLA, typed l and o. He phoned Bill Duvall at SRI to confirm receipt. Two letters had arrived. Then the system at the receiving end crashed. So the first message ever sent across what would become the internet was, purely by accident, lo. There is a certain poetic quality to that. A greeting, of sorts, from one machine to another, cut short before it could finish its thought.

    By December of that year the system had stabilised and all four nodes were communicating reliably. By 1971 there were fifteen nodes. By 1981 there were over two hundred.

    From Military Network to Academic Commons

    ARPANET was never supposed to be a public network. Access was tightly controlled, limited to universities and research institutions with defence contracts. But academic culture has a way of finding applications its funders never imagined. Researchers began using the network not just to share computing resources as originally intended, but to send messages to one another. In 1971, Ray Tomlinson wrote the first email program and chose the @ symbol to separate a user’s name from their host machine. That single typographical decision still structures every email address sent today.

    The volume of personal messages travelling across ARPANET alarmed some at the Department of Defense. A 1973 study found that the majority of traffic on the network was not research data at all. It was electronic mail. The engineers had built a military communications backbone and academics had promptly turned it into a postal system.

    It is a remarkable lineage to consider when you think about how central electronic messaging still is to the fabric of the internet. Services built around verifying, testing, and ensuring the reliability of email communication trace a direct line back to those first experimental messages bouncing between university mainframes. Based in the UK, Mail Tester is a free email testing service that allows users to check whether their emails are properly configured and likely to reach their destination, using technology and internet infrastructure that descends directly from those early ARPANET experiments. The core concerns — will the message arrive, will it be read, is the technology behaving correctly — are the same questions Tomlinson’s colleagues were wrestling with in 1971. You can find the service at https://mail-tester.co.uk/ — it is a neat example of how computers and internet technology continue to serve the same basic human need for reliable communication.

    TCP/IP and the Birth of the Modern Internet

    ARPANET was not the internet. It was a forerunner, a prototype, a proof of concept on a grand scale. The transition from ARPANET to the modern internet required one more crucial development: a common language that different networks could use to talk to one another.

    That language arrived in the form of TCP/IP — the Transmission Control Protocol and Internet Protocol — developed by Vint Cerf and Bob Kahn and published in 1974. TCP/IP was not tied to any specific hardware or network type. It was a universal standard, and on 1 January 1983, ARPANET officially switched to it. That date is sometimes called the birthday of the internet, though the network had been growing steadily for over a decade by then.

    ARPANET was officially decommissioned in 1990. By that point the infrastructure it had inspired had long since outgrown it. Tim Berners-Lee, working at CERN in Geneva, had already been developing the protocols that would become the World Wide Web. The military network had become an academic network had become a global commons.

    What ARPANET Left Behind

    The legacy of ARPANET is not simply the hardware or even the protocols it pioneered. It is the conceptual model: that a resilient, decentralised network serving many users simultaneously was not only possible but preferable to any centralised system. Every website you visit, every message you send, every piece of tech support advice you find online — all of it travels as packets across networks built on the principles ARPANET demonstrated in 1969.

    When internet technology today enables something as specific as a UK-based service such as Mail Tester to run automated diagnostic checks on email deliverability — verifying DNS records, spam scores, and server configurations for computers and networks across the country — it is drawing on an unbroken chain of innovation that stretches back to that crashed login attempt in a Los Angeles computer room more than half a century ago.
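
    For a flavour of what the DNS side of such a diagnostic involves, here is a short TypeScript sketch using Node’s built-in resolver. It inspects only two of the many signals a real deliverability tester examines (the domain’s MX records and its published SPF policy), the domain name is a placeholder, and it is not Mail Tester’s implementation.

    ```ts
    // Minimal DNS-level email diagnostics: look up where a domain receives
    // mail (MX records) and what sender policy it publishes (SPF, in TXT).
    import { resolveMx, resolveTxt } from "node:dns/promises";

    async function checkEmailDns(domain: string): Promise<void> {
      // MX records name the servers that accept mail for the domain.
      const mx = await resolveMx(domain);
      console.log("MX:", mx.sort((a, b) => a.priority - b.priority));

      // SPF lives in a TXT record beginning "v=spf1" and lists the hosts
      // allowed to send mail on the domain's behalf.
      const txt = (await resolveTxt(domain)).map(parts => parts.join(""));
      console.log("SPF:", txt.find(r => r.startsWith("v=spf1")) ?? "none published");
    }

    checkEmailDns("example.com").catch(console.error); // placeholder domain
    ```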

    ARPANET’s architects were solving a specific Cold War problem. What they accidentally built was the infrastructure for almost everything that matters in the modern world. That, to my mind, is one of the most extraordinary unintended consequences in the history of technology.

    Frequently Asked Questions

    What was ARPANET and when was it created?

    ARPANET was the Advanced Research Projects Agency Network, a computer network funded by the US Department of Defense and first made operational on 29 October 1969. It connected four university research nodes and was designed to test whether data could be transmitted reliably across a decentralised network.

    What was the first message ever sent on ARPANET?

    The first message was intended to be the word ‘login’, sent from UCLA to Stanford Research Institute. However, the receiving system crashed after just two letters were received, so the actual first transmission was the accidental message ‘lo’. Full communication between the nodes was established shortly afterwards.

    How did packet switching work on ARPANET?

    Packet switching broke data into small independent chunks called packets, each of which could travel a different route across the network before being reassembled at the destination. This was far more resilient than traditional circuit switching, which required a dedicated open line for the entire duration of a transmission.

    When did ARPANET become the internet?

    ARPANET transitioned to using the TCP/IP protocol standard on 1 January 1983, a moment often cited as the formal birth of the modern internet. ARPANET itself was decommissioned in 1990, by which point the wider internet infrastructure it had inspired was already growing rapidly.

    Did the UK have any role in the development of ARPANET?

    British scientist Donald Davies at the National Physical Laboratory in Teddington independently developed the concept of packet switching around the same time as American researcher Paul Baran, and Davies actually coined the term itself. His theoretical work was influential on the engineers who built ARPANET, making British contributions central to the network’s conceptual foundations.

  • The Rise and Fall of Internet Explorer: Microsoft’s Browser That Defined an Era

    Few pieces of software have shaped the experience of everyday computing quite like Internet Explorer. The history of Internet Explorer is, in many ways, the story of the early web itself: a tale of rapid conquest, corporate ambition, technical stagnation, and an eventual, drawn-out farewell that took far longer than most people expected. To understand it properly, you have to go back to the mid-1990s, when the internet was still something most people encountered for the very first time.

    In 1995, Microsoft made a decision that would reshape the browser landscape entirely. Rather than building a browser from scratch, the company licensed the source code from Spyglass Mosaic and used it as the foundation for Internet Explorer 1.0. It was a modest beginning, bundled quietly with the Windows 95 Plus! pack. But Microsoft moved fast. By 1996, Internet Explorer 3.0 had arrived with support for CSS, JavaScript, and plug-ins, making it a credible rival to Netscape Navigator, which had until then enjoyed an almost uncontested position as the gateway to the web.

    Vintage desktop computer setup evoking the history of Internet Explorer in a dimly lit early 2000s home office

    The Browser Wars: How Internet Explorer Conquered the Web

    The period between 1996 and 2001 became known as the first browser war, and it was fierce. Microsoft had one extraordinary weapon: Windows itself. When Internet Explorer 4.0 launched in 1997, it was bundled directly with Windows 98, meaning that any new computer sold came pre-loaded with Microsoft’s browser. Netscape, which charged for its product, suddenly found itself competing against something that cost nothing and was already sitting on tens of millions of desktops. By 2002, Internet Explorer held roughly 96 per cent of the browser market. That figure is almost impossible to imagine in the fragmented landscape of today.

    The dominance was real, but it came with consequences. With no meaningful competition, Microsoft slowed development dramatically. Internet Explorer 6, released in 2001, became infamous not for what it offered but for how long it outstayed its welcome. It sat largely unchanged for five years. Web developers of that era will still wince at the memory: proprietary rendering quirks, broken box model implementations, and a cavalier relationship with web standards that forced designers to write separate code just to make things look correct in IE. Companies building digital products in the early 2000s, whether creating e-commerce platforms, publishing tools, or emerging optical and display technology services like Droptix, an optical retailer operating in the UK, had to account for IE6’s peculiarities as a core part of their workflow.

    The Slow Decline: Firefox, Chrome, and the Standards Revolution

    The turning point came in 2004 with the release of Mozilla Firefox. Here was a browser built with genuine respect for open standards, offering tabbed browsing, better security, and an extensible architecture that users actually cared about. Firefox didn’t just offer an alternative; it reminded people that browsing the web could be a different kind of experience altogether. Internet Explorer’s market share began to erode, slowly at first, then with increasing speed.

    Close-up of a vintage keyboard and mouse representing the history of Internet Explorer era web browsing

    Then came Google Chrome in 2008, and the erosion became a collapse. Chrome was fast, minimalist, and updated silently in the background, always staying current. Microsoft, meanwhile, continued to iterate on Internet Explorer through versions 7, 8, 9, 10, and 11, each improving on its predecessor but never quite shaking the reputation that had calcified around the brand. By the time IE11 arrived in 2013, many developers had simply stopped designing for it first. The browser had gone from the assumed default to a fallback consideration.

    Microsoft officially retired Internet Explorer 11 in June 2022, ending support for most versions of Windows 10. The browser that had once commanded nearly the entire web had been reduced to a legacy compatibility tool, kept alive mainly because certain enterprise systems, particularly in banking and government, had been built so deeply around IE-specific behaviour that migrating them was genuinely complex and costly.

    What Did Internet Explorer Actually Leave Behind?

    The legacy of Internet Explorer is more complicated than the mockery it attracted in its final years might suggest. Several browser technologies we take for granted today have roots in IE innovations. XMLHttpRequest, the mechanism that underpins AJAX and modern dynamic web applications, was first introduced by Microsoft in Internet Explorer 5. The concept of browser-based rich applications, the kind that power everything from collaborative tools to complex product configuration interfaces used by digital-first retailers such as Droptix, can trace part of its lineage back to experiments Microsoft was running in IE during the early 2000s.
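
    The shape of that innovation is still easy to recognise. Below is a minimal browser-side TypeScript sketch of the classic XMLHttpRequest pattern that AJAX grew out of; the URL and the callback are placeholders for illustration.

    ```ts
    // The classic AJAX pattern: request data asynchronously and react when
    // it arrives, without reloading the page.
    function getJson(url: string, onDone: (data: unknown) => void): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", url); // asynchronous by default
      xhr.onreadystatechange = () => {
        // readyState DONE (4) means the whole response has been received.
        if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
          onDone(JSON.parse(xhr.responseText));
        }
      };
      xhr.send();
    }

    // Hypothetical usage: fetch data and update part of the page in place.
    getJson("/api/products.json", data => console.log("loaded", data));
    ```

    Modern code reaches for fetch and promises instead, but the interaction this enables (request, wait, update the page in place) is the one Microsoft’s early XMLHttpRequest experiment made possible.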

    Internet Explorer also forced the web standards movement to become more rigorous. The chaos of the IE6 era prompted organisations like the W3C to push harder for consistent, enforceable standards, and it motivated browser makers to compete not just on features but on standards compliance. In a strange way, IE’s failures helped build the modern web’s strengths.

    Microsoft itself drew the clearest line under the IE era when it launched Microsoft Edge in 2015, initially with a new rendering engine before eventually rebuilding it on Chromium in 2020. Edge was, in part, an act of institutional contrition: an acknowledgement that the old approach had run its course. The history of Internet Explorer ends not with a bang but with a redirect, as users who still tried to open IE were eventually sent automatically to Edge instead.

    Why the History of Internet Explorer Still Matters

    Understanding the history of Internet Explorer matters because it illustrates how quickly technological dominance can evaporate when complacency sets in. A browser that held 96 per cent of the market was reduced to irrelevance within a decade, not because the web stopped growing but because it grew in directions IE refused to follow. For anyone working in technology, digital product design, or even the specialist online retail space where companies like Droptix operate in the UK, the story serves as a vivid reminder that the infrastructure people use to access the web is never as permanent as it seems.

    Internet Explorer was a product of its moment: ambitious, dominant, and ultimately unwilling to adapt until it was far too late. It shaped how an entire generation learned to use the internet, and the scar tissue it left on web development took years to fully heal. That, perhaps more than any market share figure, is its most enduring legacy.

    Frequently Asked Questions

    When was Internet Explorer first released?

    Internet Explorer 1.0 was released in August 1995, initially bundled with the Windows 95 Plus! pack. It was based on licensed code from Spyglass Mosaic and was a modest early effort that Microsoft rapidly iterated on over the following years.

    Why did Internet Explorer become so dominant in the late 1990s?

    Internet Explorer’s dominance came primarily from Microsoft bundling it directly with Windows 98, which meant it was pre-installed on almost every new PC sold. This made it instantly accessible to millions of users at no extra cost, while its main rival Netscape Navigator charged for its product, making competition extremely difficult.

    What caused the decline of Internet Explorer?

    The decline began with the launch of Mozilla Firefox in 2004, which offered better security, tabbed browsing, and genuine respect for web standards. Google Chrome’s arrival in 2008 accelerated the collapse, as its speed and automatic updates set a new benchmark. Internet Explorer’s reputation for poor standards compliance and slow development made it increasingly hard to defend.

    When did Microsoft officially end support for Internet Explorer?

    Microsoft ended support for Internet Explorer 11 on 15 June 2022 for most Windows 10 versions. After this date, users attempting to open Internet Explorer were redirected to Microsoft Edge. Some very specific enterprise and government systems had extended support arrangements, but the browser was effectively retired for general use.

    Did Internet Explorer contribute anything lasting to web technology?

    Yes, significantly. Microsoft introduced XMLHttpRequest in Internet Explorer 5, which became the foundational technology behind AJAX and modern dynamic web applications. IE also inadvertently strengthened the web standards movement; its widespread non-compliance made browser vendors and standards bodies work harder to establish consistent, enforceable rules that still govern the web today.

  • The First Online Shopping Experiences: What It Was Really Like to Buy Things on the Early Internet

    Early internet shopping was not the slick, one-click experience we know today. It was slow, strange, and required a leap of faith that most people simply were not willing to make. And yet, from these clunky, uncertain beginnings, an entire commercial world was born.

    Before the Basket: The Internet as a Catalogue

    In the early 1990s, the web was barely functional as a shopping destination. Most people were still dialling in on slow modems, waiting minutes for a single image to load. The idea of typing your bank card number into a computer felt, to most, like handing your wallet to a stranger in a dark alley. Retailers who did attempt to sell online had websites that looked closer to a printed leaflet than anything resembling a shop. Navigation was guesswork, product descriptions were sparse, and photographs – if they existed at all – were tiny, blurry squares.

    Yet the curiosity was there. Catalogues had been selling by post for decades, and the internet felt like a natural extension of that idea – only faster. The question was whether anyone could make it trustworthy enough to actually hand over money.

    The Pioneers Who Made Early Internet Shopping Possible

    A handful of companies took the risk in the mid-1990s. Amazon began as an online bookshop in 1995, a deliberately safe product to test the waters – books were uniform, easy to describe, and cheap enough that a bad purchase would not ruin anyone. Around the same time, eBay launched as a peer-to-peer auction site. Both ventures succeeded partly because they started small and built trust gradually.

    In the UK, the story was slightly different. British consumers were cautious by nature, and broadband was years away from being widespread. Early internet shopping here often meant a long, fragile dial-up session just to complete a transaction, followed by a confirmation letter in the post rather than an email. The infrastructure simply was not ready for the ambition.

    What Shopping Actually Felt Like in 1999

    By the late 1990s, things had improved marginally. Secure payment gateways had been introduced, and the padlock icon in your browser offered some reassurance. Still, early internet shopping involved a peculiar ritual: carefully reading every page of a website’s security policy, printing out your order confirmation as proof it had actually happened, and then waiting anxiously to see whether anything arrived.

    Customer service was handled by email with response times measured in days. Returns were a complicated affair involving printed forms and trips to the post office. There was no live chat, no tracking link, and no guarantee that anyone was monitoring the inbox at all. Shopping this way demanded patience that modern consumers would find almost unimaginable.

    How Trust Was Eventually Built

    What changed everything was not technology alone – it was reputation. User reviews, which Amazon introduced in the mid-1990s, gave shoppers something to hold onto. If a hundred other people had bought a product and found it acceptable, perhaps it was safe to try. This social proof became the foundation on which the entire industry was rebuilt.

    Today, we carry that entire history in our pockets. Modern tools have compressed decades of development into apps and instant checkouts. If you want to explore what shopping looks like now for local communities, a free UK shopping app shows just how far things have come from those nervous early days of typing card numbers into a 640-pixel-wide browser window.

    A Legacy Worth Remembering

    The story of early internet shopping is really a story about human trust – how it is built slowly, broken easily, and once established, becomes the invisible foundation of everything. The awkward, stuttering beginnings of online retail shaped every expectation we now take for granted. Every smooth checkout, every next-day delivery, every saved basket owes something to those uncertain pioneers who clicked “buy” before they truly believed it would work.

    Person typing carefully on an old keyboard during the early internet shopping era in a 1990s home office
    Stacked vintage cardboard parcels by a doorway representing early internet shopping deliveries from the 1990s

    Early internet shopping FAQs

    What was the very first thing ever sold online?

    The claim most often repeated is that a Sting CD was sold via NetMarket in the United States in 1994, making it one of the earliest recorded secure online transactions. However, informal trades and sales had taken place over early networks before that, so pinpointing a true ‘first’ is difficult.

    Why were people so reluctant to try early internet shopping?

    The main concern was security. Entering payment details into a website felt deeply unfamiliar and risky at a time when most people had no understanding of encryption. Slow internet speeds, poorly designed websites, and a lack of any trusted reviews or guarantees also made the experience feel unreliable compared to walking into a shop.

    How did online shopping change the British high street?

    The shift was gradual rather than sudden. Throughout the early 2000s, more consumers grew comfortable with buying online, which began drawing footfall away from physical shops. By the 2010s the effect was significant, with many established retailers closing stores or restructuring entirely to compete with online-only rivals.