Technology 101: Hardware

If you want to work in tech then it’s helpful to have an understanding of the hardware, software and internet technologies that power the industry at large.

We’re going to start our tour with a look at hardware. When we talk about hardware we are referring to the individual components that make up computer systems, as well as the physical devices themselves. This includes all of the devices you regularly use to communicate throughout the day – your computer, mobile phone, smart watch – as well as the electrical components inside them.

Let’s take a look at some common components found in modern hardware devices.

Processors

A processor is a hardware component (a physical part of a computer system) that performs calculations on data, then outputs the calculated result for further use.

It’s hard to overstate the importance of the processor in our modern lives. Almost every electronic device we interact with on a day-to-day basis, from our smartphones and smartwatches to our cars, has a processor inside.

What is data?

Data are sets of values or information about an object or person. For example, user data could be the information stored by an application that contains a user's attributes, like their email address or location.

Central Processing Unit (CPU)

The CPU is the "main" processor. It performs basic calculations, logical comparisons, and input and output ("I/O") operations based on the instructions ("code") it receives from a computer software program. The CPU usually consists of an integrated circuit with millions or billions of microscopic silicon transistors. Many of the original companies involved in the early development of silicon transistors were based in California between San Francisco and San Jose, which is why this area has been given the nickname Silicon Valley.
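To get a feel for what "instructions" look like, here's a rough sketch using Python's built-in dis module. It shows the bytecode instructions a small function compiles down to – these are virtual machine instructions rather than raw CPU instructions, but the idea is the same: a program is ultimately a list of simple calculations, comparisons and data movements.

```python
# A rough illustration of "instructions": dis prints the low-level
# bytecode a small Python function compiles to. (Real CPU instructions
# are lower-level still, but follow the same pattern.)
import dis

def add_and_compare(a, b):
    total = a + b       # a basic calculation
    return total > 10   # a logical comparison

dis.dis(add_and_compare)  # prints the instruction list
```

Running this prints one instruction per line – loads, an addition, a comparison, a return – which is exactly the kind of step-by-step work a processor churns through billions of times per second.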

A brief diversion into Bits and Bytes

Modern processors are made up of millions of microscopic transistors. Each transistor acts like a switch that can be turned on or off, representing either 0 (off) or 1 (on).

Computers use a system called binary to store information, and a binary digit (commonly known as a bit) is the smallest unit of data in computing, represented by 0 or 1. If we want to store the current state of a bit (0 or 1) then we can use a transistor on a processor, with the ‘off’ state of the transistor representing 0 and the “on” state representing 1.

A byte is a unit made up of 8 bits (so eight 0s or 1s in a row, like 01000101). A byte is commonly used to represent a single character of text, like the letter E.
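You can check this for yourself in a few lines of Python: the 8-bit pattern 01000101, read as a binary number, is 69 – which is the character code for the letter E.

```python
# The 8 bits 01000101 interpreted as a binary number...
bits = "01000101"
value = int(bits, 2)   # base 2 -> decimal
print(value)           # 69

# ...is the character code for the letter E.
print(chr(value))      # E

# And going the other way, E's character code back into 8 bits:
print(format(ord("E"), "08b"))  # 01000101
```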

Multicore Processors

Most modern CPUs contain two or more processing units, or "cores". A multicore processor can split instructions across multiple cores to run at the same time, which increases the speed at which computer programs can run.
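Here's a minimal sketch of that idea using Python's multiprocessing module: a large list of numbers is split into slices, each slice is summed in a separate process (which the operating system can schedule onto its own core), and the partial results are combined at the end. The choice of four slices is arbitrary.

```python
# A sketch of splitting work across cores. Each worker process sums
# one slice of the data; partial results are combined at the end.
from multiprocessing import Pool, cpu_count

def sum_chunk(chunk):
    # Runs in a separate process, so it can execute on its own core.
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::4] for i in range(4)]  # 4 interleaved slices
    with Pool(processes=min(4, cpu_count())) as pool:
        partial_sums = pool.map(sum_chunk, chunks)  # run in parallel
    print(sum(partial_sums))  # same answer as sum(numbers)
```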

Graphics Processing Unit (GPU)

GPUs are a special type of processor designed to process graphical data (the data that is output to a screen – images, video, 3D graphics, etc.) much faster than a standard processor. They are commonly found in personal computers, laptops, gaming consoles and smartphones. In the past few years there has also been huge demand for GPUs in the crypto mining (e.g. Bitcoin) and deep learning (a form of Machine Learning) industries, as the chips work very well with the software these industries use.

Modern CPUs and GPUs are incredibly difficult to produce and there are only a small number of processor fabricators ("fabs") that have the technical capabilities to manufacture them. The limited supply combined with the large demand has led to a global shortage of GPUs – if you've struggled to find a PlayStation 5 in the past year or so, this is likely the reason why.

Memory & Storage


A processor's main job is to perform calculations. It reads the data it needs from memory, performs its calculations, and then writes the results back to memory once it's done. Memory is usually positioned very close to the processor, which allows it to transfer data at very high speeds, but this speed comes with some costs.


Early computers could only store and reproduce data in the order it was written, which is not very efficient. Random access memory, or RAM, is the name given to memory that can be accessed or changed in any order. RAM is a “volatile” memory, which means that any data it is holding will be lost if the device loses power.

Read Only Memory, or ROM, is non-volatile (it retains information when it loses power) and contains information that cannot be changed. ROM is usually used to store software that doesn't ever need to change, like the software that controls the initial loading process when you turn on your computer.

RAM does a great job of helping the CPU perform calculations at high speed, but its volatile nature means it's not a good option if we want to store information long-term. RAM is also fairly expensive to manufacture, so if we need to store a lot of information we will need to look elsewhere. That's where storage steps in.

Storage

Storage is the category of computer components that hold large amounts of data in a “persistent”, or non-volatile way (they retain information when they do not have power). Storage tends to be slower than other types of memory like RAM, but it’s also much cheaper – you can currently buy a 1TB external storage drive on Amazon for about $50, while the same amount of RAM would be closer to $10,000.
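The practical difference between volatile memory and persistent storage is easy to show in code. In this sketch, a Python dictionary lives in RAM and would vanish when the program exits, while the same data written to a file survives and can be read back later – even by a completely different process. The filename and the example user data are made up for illustration.

```python
# Data in a variable lives in RAM and is lost when the program exits;
# data written to a file lives in storage and survives.
import json
import os
import tempfile

user = {"email": "ada@example.com", "location": "London"}  # in RAM only

path = os.path.join(tempfile.gettempdir(), "user.json")
with open(path, "w") as f:
    json.dump(user, f)  # write to persistent storage

# A brand new process could read the file back after a restart:
with open(path) as f:
    restored = json.load(f)

print(restored == user)  # True
os.remove(path)          # clean up the demo file
```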

Those of us who were around in the '90s and early 2000s will probably be familiar with Hard Drives (HDD). Hard drives are persistent storage devices that store data on a spinning "platter". While they can hold a large amount of information, hard drives are quite slow at reading and writing data. They also contain many moving parts, so they aren't a great choice for devices that are likely to encounter sudden movements or shocks, like laptops or smartphones.

Most modern computers and all mobile devices have moved to using Solid State Drives (SSD), which store data on integrated circuits and are much faster than hard drives at reading and writing data. SSDs tend to also be physically smaller than hard drives and are much better suited to portable devices as they don’t contain any moving parts.

Hard Drives with spinning platters tend to offer the largest storage capacity for the price, so it’s still quite common to use them when speed isn’t a priority, such as for backup storage or to hold large amounts of media, like photo or video collections.

Motherboard

Most desktop and laptop computers have a motherboard, a printed circuit board that holds the individual components in place, provides them with power, and allows them to communicate with each other.

Larger devices like desktop computers usually have motherboards that let you add and remove individual components, making it relatively easy to upgrade or fix the components if they break or if you decide you need something better.

Laptops used to be similar, but in recent years most manufacturers have moved to designs where the components are fixed in place. This allows them to make the devices smaller and potentially more durable as components won’t get knocked out of place. The downside is that if any part fails the entire board will need to be replaced.

System on a chip (SoC)

A system on a chip is a special type of integrated circuit that contains all or most of the components a device needs, like the CPU, memory and storage, on a single chip. While a motherboard has interchangeable components, all of the components on a SoC are integrated into the chip itself.

Packing all of the integrated components tightly together on one chip can allow you to increase performance while reducing power consumption. This combination has led to SoCs becoming very widely used in portable devices like smartphones and tablets, where longer battery life is extremely important.


Now that we’ve covered components, let’s take a look at some common hardware devices.

Computers

A computer is an electronic device that can be programmed to perform calculations and operations automatically. The first computers were as large as a room and powered by vacuum tubes – thankfully things have improved a little since then.

Personal Computers

A personal computer is simply a computer that has been designed for an individual person to use. The first personal computers were sold in the 1970s, and were extremely limited by today's standards – most were sold without monitors or storage, and there were hardly any software applications to use. The Apple II, made by Steve Wozniak and Steve Jobs, was one of the best-selling computers of this era.

The first IBM PC was released in 1981 and used an operating system called PC DOS, which was licensed to IBM by Microsoft. IBM PCs used fairly standard hardware components, and as IBM was only licensing DOS from Microsoft they couldn’t prevent other computer manufacturers from using similar components and also licensing DOS to create “PC Clones” – basically much cheaper versions of the IBM PC. The competition and lower prices led to huge growth in the PC market, and over time the operating systems upgraded from the text-based DOS to the graphical user interface of Windows, and PC clones came to hold the majority of personal computer market share.

PC and Macs

While personal computer can be abbreviated to PC, people generally use that term to refer to personal computers that are the natural progression from the early IBM PCs, using standard components like Intel or AMD x86-based processors, and running Microsoft’s Windows operating system. While most of Apple’s computers are personal computers in the sense that they are used by individuals, Apple have traditionally chosen to use their own system architecture designs and refused to license their operating systems to outside manufacturers, making their systems incompatible with computers that run Windows. These differences have led to people referring to the Windows computers as “PCs”, and Apple’s computers as “Macs” (short for Macintosh, the name of some early Apple computers).

Mobile Phones

Early mobile phones were brick-sized portable devices that could make and receive (very expensive) phone calls. As technology progressed they shrank in size, and by the late 1990s they could fit in a pocket and had gained the ability to send and receive short messages, or SMS. By the mid-2000s, network and hardware technologies had evolved to a point where you could buy a phone from a company like Nokia or Ericsson with a color screen, a very basic web browser, some built-in games and apps, and a low-quality camera.

Smartphones and tablets

A smartphone combines mobile telephone services and advanced computing functionality into a portable device. Apple launched the first iPhone in June 2007, and while some people in the tech industry were initially skeptical, consumers quickly fell in love with the devices and their large (for the time) glass touchscreens and Apple-developed operating system. Google launched their first Android phone in 2008, and as more people started to experience the benefits of a “computer in your pocket” the switch to smartphones was soon well underway.

If you think about it, the phone part of the smartphone name now seems a bit outdated – if you’re anything like me then the “Phone” app is one of your least used features on what is basically a portable social media, messaging and entertainment device.

The modern smartphone market is dominated by Android and Apple devices. While Google develops the Android operating system, they allow outside manufacturers like Samsung and Xiaomi to use the system and as a result the majority of smartphones sold globally are Android devices.

Tablets are a category of portable mobile devices with touch screens that are larger than smartphones, but smaller than what you would generally find on a laptop computer. Many companies make tablets, but in terms of sales the category is dominated by Apple’s iPad range.

Servers

A server is a computer that provides functionality to other devices, called clients. One of the key benefits of servers is that a single server can serve multiple clients at the same time.

If you think about it, this is very similar to how wait staff (servers) work in restaurants. One server is able to take the orders of many tables of customers (the clients), relaying the order information through to the bar and kitchen for them to prepare. This is much more efficient than having each customer try to order their food and drinks directly with the bar and kitchen (having worked in my fair share of restaurants, I’d recommend you never try to order directly from the kitchen during a dinner rush…).

The computer servers we use today generally follow the Request-Response model: a client sends a request to the server, the server performs an action, and the server sends the result back to the client. You can see a real-world example of this in action when you use your web browser to visit a web page. When you click a link your computer (the client) sends a request to a server. The server responds to the request by sending over a response that contains all the HTML, CSS and JavaScript files that your browser needs to render (turn into) a web page. When you click another link the process repeats, as it does for any other people who are also viewing the website.
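The whole request-response cycle can be sketched with Python's standard library: a tiny server that returns an HTML page, and a client that requests it – both in one program for the sake of the demo. The page contents and the use of a random free port are arbitrary choices for illustration.

```python
# A minimal request-response demo: an HTTP server and a client.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server performs an action (builds a page) and responds.
        body = b"<html><body><h1>Hello from the server</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sends a request and receives the HTML a browser would render.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    html = response.read().decode()

print(html)
server.shutdown()
```

In real life the client and server are on different machines, often on opposite sides of the planet, but the shape of the exchange is exactly this.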

While early computers could be large enough to take up an entire level of a university computer science lab, most modern servers are much smaller and made with components that are similar to the kind you’d find in your home computer. If you have multiple servers then it’s common to mount them in racks, which helps you store and provide power to them in a more efficient way.

Data Center

A data center is a building that is dedicated to holding computer systems, usually for telecommunications, networking or storage. Storing the servers and systems together makes it easier for companies to manage and maintain the hardware, and can also have benefits in areas like security and energy use.

It’s sometimes easy to forget just how large the tech world has become. While Meta may have been founded by Mark Zuckerberg and his friends in his Harvard dorm, the company has grown so large that they now have 18 data centers of their own around the world to support products like Facebook, Instagram, WhatsApp and Oculus.

Virtualized Servers

Traditionally, server hardware and software was designed so a server would support a single application – a company's mail server (handling their email) would be a separate physical device from the server that handles file storage, and so on. While this separation has benefits (for example, it's easier to find and fix a problem if you know it's limited to a specific server), it means that many servers spent the majority of their time using only a tiny fraction of their processing power. And as networks grew, the increasing need for servers meant the physical space they took up within companies and data centers was also increasing.

Server virtualization helps solve both of these issues. A virtual server is created by software that takes one physical server and divides it into multiple virtual machines, making each one operate as if it were a separate physical device. By running multiple virtual servers we can make fuller use of a physical server's hardware capabilities, and we don't need to take up as much physical space as we would if we were operating separate physical servers.

Cloud Servers

Imagine you’re joining an early-stage internet startup in 1995. Your company would need to buy and manage multiple servers – a mail server for company email, web servers for the company website, the list goes on. These servers would cost thousands of dollars to buy and require full time systems engineers to be hired for set up and ongoing system maintenance. The total cost could easily run into hundreds of thousands of dollars or more, making it out of reach for many companies. Thankfully times have changed!

A cloud server is a centralized server of virtual machines that can be accessed remotely over a network (like the internet), allowing multiple users spread across a wide area to share the server’s resources. Large tech companies like Google, Amazon and Microsoft offer cloud computing services to businesses as an alternative to managing their own “on-premise” servers. If we go back to our example above, while a startup in the 90’s had to buy and manage physical servers, modern startups can simply “rent” virtual servers as-needed, at a fraction of the cost. This decreased cost has been a huge benefit to the tech industry, and has helped many small startups get off the ground where previously the server costs would have made it impossible.

We also rely on cloud servers to take care of our personal computing needs. A real world example that you probably use is either Google Drive or Apple iCloud, which store our photos, videos, notes and music centrally so we can access them from anywhere using our phones, computers and tablets.

Networks

Imagine you've bought the latest Google or Apple smartphone but you are not allowed to connect it to your mobile phone network or Wi-Fi. Your powerful phone is basically limited to being a fairly expensive camera and note-taking device, unless you can connect it to other devices or systems to download apps, share pictures and videos, and chat with friends. The device's potential comes from the network.

A network is formed when two or more devices (computers, smartphones, smart home assistants, etc.) connect to each other to share data and resources. A network can be as small as two devices connected together, like your smartphone and Bluetooth headphones, or as large as millions of nodes (each node being a device connected to a network) communicating through a series of interconnected networks, like the internet.

Bandwidth is the amount of data that can be transmitted across a network in a given amount of time. Higher bandwidth means more data can be moved at faster speeds. In the early days of the internet it was common to connect using dial-up modems, which had very low bandwidth – you could spend an entire day trying to download one song (and then lose it partway through if someone picked up the phone). With the move to broadband (wide bandwidth = fast transfer speeds) connections, we're now able to stream 4K video to our devices almost instantly.
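A quick back-of-the-envelope calculation makes the difference concrete. The figures below are assumed, typical values: a 56 kbit/s dial-up modem versus a 100 Mbit/s broadband connection, downloading a 5 MB song. Note that bandwidth is quoted in bits per second while file sizes are in bytes, hence the factor of 8.

```python
# Rough download-time comparison (assumed, typical figures).
song_bytes = 5 * 1024 * 1024   # a 5 MB song
dialup_bps = 56_000            # 56 kbit/s dial-up modem
broadband_bps = 100_000_000    # 100 Mbit/s broadband

# time = size in bits / bandwidth in bits per second
dialup_seconds = song_bytes * 8 / dialup_bps
broadband_seconds = song_bytes * 8 / broadband_bps

print(f"Dial-up:   {dialup_seconds / 60:.1f} minutes")   # ~12.5 minutes
print(f"Broadband: {broadband_seconds:.2f} seconds")     # ~0.42 seconds
```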

Wired Networks & Ethernet

We can split networks into wired and wireless. On a wired network each device is connected by physical cables (the "wire" in wired). Smaller networks, like those you'd find in a home or office, are called LANs (Local Area Networks). These networks often use a technology called Ethernet, which uses cables with RJ-45 connectors to connect the various devices together. For longer distances (some network cables literally cross oceans) the cables are made using ultra-high-bandwidth fiber optic cables that can transfer huge amounts of data at close to the speed of light.

Wired systems are capable of transferring data at very high speeds but they do have one fairly obvious downside – devices need to be plugged in using a wire to connect to the network. This isn’t exactly ideal for our modern portable devices (it would be hard for Apple to sell many Watches if they needed to be plugged into an ethernet cable to work!).

That’s where the wireless networks come in.

Wireless Networks & Wi-Fi

You're probably familiar with the name Wi-Fi. While many people use the name to refer to any kind of wireless connection, Wi-Fi is actually the name for a set of standards for wireless local area networks (WLANs) – basically wireless networks that cover relatively short distances. Modern Wi-Fi is fairly high bandwidth, so it's often used in places like homes, offices and universities to supply connections to laptops, smartphones, tablets and smart TVs.

Because Wi-Fi networks are local area networks, they are usually connected to the internet itself via a wired connection (so they are basically taking a wired internet connection and making it wireless within a small area). Wi-Fi networks usually operate independently of each other – if you've ever opened the Wi-Fi settings on your phone or computer and seen a long list of available access points then you'll have already realized this. You can't simply connect to the nearest Wi-Fi access point automatically and then switch between all the other networks; you'll need to know the Wi-Fi password to gain access to most of them.

If the network needs to cover longer distances where Wi-Fi won’t work, then we may turn to mobile networks (like we use for our smartphones). While early mobile networks were painfully slow, modern networks use newer technologies like 4G (4th Generation broadband) or 5G (5th generation broadband) to offer very high transfer speeds to their customers. In fact, it’s becoming fairly common for people to have a 5G mobile connection that has more bandwidth than the wired connection from their broadband provider at home.

Finally, for really small networks – PANs, or Personal Area Networks – we have standards like Bluetooth. Bluetooth has been specially designed to operate at shorter distances while using much less power than Wi-Fi or a mobile radio. This makes it a great choice for connecting personal devices together, like your wireless headphones with your phone, or your remote control and TV. But the limited bandwidth means Bluetooth isn't a great choice when you need to transfer larger amounts of data, which is why some portable devices like DSLR cameras will still turn to Wi-Fi when they need to move large data, like 4K video.

Next steps

You’ve made it through the introduction to hardware, well done! I recommend continuing the Technology 101 series with my introductions to software and the internet.
