Search engines are one of the main channels that can bring people to your company’s products or services, so even if you’re not involved in the marketing side of the business, it’s still very helpful to know how they work. By the end of this guide you’ll have a solid grasp of the fundamentals of Search Engine Optimization (SEO), and understand why businesses are willing to spend significant amounts of time and money on it.
We’ll be covering:
- What is SEO, and why should you care?
- Types of web traffic
- How SEO works
- The rise of Google search
- The role of professional SEOs
- Keyword research, backlinks, meta tags
And much more.
Let’s get started.
What is SEO, and why should you care?
Imagine for a moment that I scratched the side of my car (purely theoretical of course, I would never be so careless). I need to find someone who can repair my little accident so I open up Google and search for “car panel scratch repair near me”.
Less than one second later Google has returned a page of results, known in the industry as a Search Engine Results Page, or SERP. This SERP contains an ordered list of results for businesses that can potentially help me fix my car. The ranking order of the list is hugely important to the companies on it, as over 25% of people click the first result on the SERP, and that jumps to 55% for the first 3 results. If more than half of a website’s potential visitors click one of the first 3 results, it’s easy to see how ranking highly in search can make or break many businesses. This raises a multi-million dollar question – how do websites get to the number 1 spot on the results page? To answer this, we enter the world of Search Engine Optimization, or SEO.
The term SEO describes both the process, and the actual job role, of trying to get website content to rank higher on search results. Since Google is the largest search engine on the web most SEO work is focused there, but you’ll also find SEOs working to optimize written content, videos and listings on sites like YouTube and Amazon.
Traffic that comes to a site by clicking on a (non-advertising) link on a search results page is known as “organic” traffic, as it has been sent to the website for “free”. If you visit a site by typing the web address into the browser and hitting enter then it is considered “direct” traffic, because you have come directly to the site. If you visit a site via clicking a shared link, for example from a post on Reddit, it is considered “referred” traffic because you were referred to the site by whoever shared the link. Most businesses consider organic traffic to be the most important source of web traffic, so we put a lot of resources into trying to increase the amount of organic traffic to our sites.
But why is organic traffic so valuable?
Ranking highly in a search engine for valuable keywords (don’t worry, I’ll explain these in more detail shortly) is a low cost way to bring traffic to a website. Organic traffic is not free – there will always be some cost through a combination of the time and money spent creating a site’s content or doing SEO work to rank highly. But the marginal cost of each additional website visitor you receive from organic traffic is effectively zero. By comparison, if we were running a paid search advertising campaign we could easily be paying more than $5 per click every time we bring someone to our site, and while the costs for advertising on social networks like Instagram or TikTok are usually a little lower they are still in the same ballpark. Dollars per click vs effectively free – it’s easy to see why ranking #1 in search is considered the holy grail for many businesses.
I realize you probably don’t want to work in SEO so you may be asking why you should care about any of this. And that’s a good question.
Increasing the volume of quality web traffic to your site provides the potential for more users, customers, revenue and profits. And ultimately, these are extremely important metrics for growing any tech business. It’s also compounding – a company that ranks at the top of the SERPs for their primary keywords can enter into a positive feedback loop where they benefit financially from the web traffic, which means they can hire more people and spend more on marketing and sales, which brings in more revenue, which they can spend on more hiring, marketing, sales, and so on.
And importantly, good SEO is a team effort. While the actual day-to-day work might fall upon an SEO specialist or an outside SEO agency, effective SEO also requires people to create content and implement technical changes and often the best people to do these tasks will be within the company. For example, if you’re part of the content team writing blog posts then knowledge of SEO will help your posts perform better at their goal of bringing in new users/customers. If you’re in the product team and understand the importance of SEO then you’ll be able to make a stronger case for why you should be devoting some of the company’s limited engineering resources to implementing the technical SEO requirements that Google favors.
How does SEO work?
To understand how SEO works today it’s helpful to know how the search industry has developed over time. Let’s jump into our time machine and take a quick look back at the history of web search.
The web exploded in popularity in the early nineties and it didn’t take long for people to seek out ways to find content among the avalanche that was being created. The first attempts to solve this were hand-selected directories that contained lists of the “best” websites, but these quickly struggled to keep up with the huge influx of new sites that were being added every day. With so much demand it was only natural that more sophisticated options would appear.
The first “all text” based search engines were launched in 1994. If you’re as old as me then you might remember names like “Lycos” and “WebCrawler”. These search engines (and their modern counterparts) used special programs called “spiders” to “crawl” the web. A spider visits a web page and takes a snapshot of all of the text on the page. It then follows any links it finds on the page and continues the crawling process on those pages, before following the links it finds on those pages, and so on, eventually crawling its way across the publicly available web. The data from all these page snapshots is then used to build a giant index that can be used for search queries. When a user enters a search query the search engine combs through the index for web pages that contain similar keywords. If I searched for “car repair” then an early search engine would check for pages containing the text “car repair” and return any matching results.
But these early “all text” search engines had big problems. One of the main signals the search engines used to determine a site’s relevance to a search query was “keyword density” – basically, how many times the keyword is used on the page. If you’re searching for “dog chew toys” then a website devoted to dog chew toys should naturally use that keyword a few times within their content, so you’d expect that page to show up near the top of the search results. Unfortunately, that wasn’t always the case. As so often happens when large amounts of money are on the line, some unscrupulous people realized that they could “keyword stuff”, adding irrelevant (but valuable) words to their own webpages to try and trick search engines and gain more traffic. Early search engines were not sophisticated enough to analyze the sentiment of the page so they would often fall prey to tricks like keyword stuffing, leading to a sharp decline in the quality of their search results.
Then Google entered the picture. Founded in late 1998 by Sergey Brin and Larry Page, Google’s search engine still crawled and indexed the web, but used a powerful new algorithm called PageRank (PR) to provide users with significantly better search results. While existing search engines were returning results based on the text on a web page, Google was also taking into account how many external pages were linking to a web page, and then giving each page a “rank” based on the perceived quality of these external links. If two sites with similar keywords were compared, a page with a few links from highly ranked pages would be considered higher quality than a site with dozens of links from low quality sources, so the higher-quality page would display further up the search results page. The PageRank algorithm was a massive improvement and removed many of the spam problems that were plaguing existing search engines, leading to much more accurate search results. Users flocked to Google, and the company quickly came to dominate the web search engine market.
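For the curious, the original PageRank paper expressed this idea with a simple recursive formula (this is the simplified version from the 1998 paper – the algorithm Google actually runs today is far more complex):

```
PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )
```

Here T1…Tn are the pages that link to page A, C(T) is the number of outbound links on page T, and d is a “damping factor” (typically around 0.85). In plain English: a page’s rank is the sum of the ranks of the pages linking to it, with each linking page’s “vote” diluted by how many links it hands out.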
Fast-forward to today. Has anything changed? In short, yes. Modern web search has evolved and Google now relies on a series of sophisticated algorithms and machine learning systems to provide us with highly accurate search results. These algorithms are influenced by a huge number of factors including the words used in the search query, your location, your previous search history, the context of the query, the relevance and usability of web pages, and many, many more.
Search results pages have also become much more personalized to the end-user. If I pull out my phone and search for “cafes near me” I will see a different results page than someone located on the other side of the same city. The results will also change based on the time of day – if I’m searching at 6:30am then the top results will be cafes that open early (a short list where I live). In a similar way, if you search for a topic that is currently in the news then Google’s algorithms are smart enough to understand this and present you with “fresh” results containing links to up-to-the-minute news articles.
Google’s powerful machine learning algorithms are constantly monitoring the behavior of their search users, with the goal of continually improving the search experience. But Google is not doing all of this work for altruistic reasons – highly accurate search results bring in users, and this makes Google SERPs some of the most valuable advertising real estate on the web. As you’ve no doubt noticed, when you perform a search on engines like Google, Amazon or YouTube, many of the results that appear are actually paid advertisements. This is known as Search Engine Marketing (SEM) and it is a huge topic in itself, but for now just know that it’s incredibly lucrative and easily covers the cost of supplying “free” search to users – Google’s ad revenue in 2021 was $209B. Yes, that’s billion, with a b.
The reason I’ve been explaining all of this in a guide to SEO is because I want you to understand that attempting to manipulate Google’s search results is a losing game. There are literally hundreds (if not more) of ranking factors that determine what shows up in our search results. We can’t simply add a few keywords to a website and expect to rank number 1 two weeks later. So what can SEOs do?
What do SEO professionals do?
It’s very tough to trick the engineers at Google. What we can do is gently influence the results by making sure Google (and other relevant search engines) can accurately understand the purpose of our websites and content, and considers our content to be relevant and well constructed.
Let’s take a look at some of the core SEO tasks people use to reach these goals.
The value of our search engine traffic will be determined by the keywords that we are ranking highly for. We perform keyword research to find and analyze the most relevant search terms for a particular website.
SEOs conduct keyword research because all keywords are not created equal – they differ based on factors like relevance, intent and search volume. We want to focus our attention on keywords that have the best combination of these factors for our particular needs.
Some search terms are more popular than others so their keywords will have greater search volume, usually measured as monthly search volume (MSV). It is generally easier to rank highly for keywords with lower search volume as competing websites won’t focus much attention on ranking for them. But less search volume = less traffic, and there isn’t much benefit to ranking #1 for a keyword if no one ever searches for it, even if it is easier than ranking for a keyword with greater search volume.
We can also break down most keywords into 4 buckets based on their intent:
- Informational keywords – this is when a searcher is looking for an answer to a specific question, like “population Stockholm”
- Navigational keywords – a searcher is looking for a specific website or page, like “airbnb” or “nytimes”
- Commercial keywords – the searcher is investigating different services and brands, like “running shoe reviews”
- Transactional keywords – the searcher is looking to perform an action or complete a purchase, like “buy fishing rod”
Professional SEOs will target different types of keywords based on the stage of the marketing funnel they are focusing on. Towards the top of the funnel where the goal is awareness they will generally target informational or commercial keywords with large search volumes to try and bring in large amounts of traffic. The vast majority of this traffic will not be ready to make a purchase or use your service but there are still benefits to exposing them to your website and brand. For example, you can potentially get them to sign up for your newsletter, which will give you the opportunity to continue marketing to them in the future.
There are many powerful SEO tools that help with keyword research, offering users information like search traffic volume and related keywords. Some of the popular tools are Ahrefs and SEMrush. Most SEOs will also export data from these tools to analyze further in an Excel or Sheets spreadsheet.
Once the keyword research has been completed we can use the findings to target specific keywords throughout our website and blog content. As an example, if our company provides a project management SaaS tool then we can make sure we are mentioning relevant keywords like “project management software” and “best project management tool” throughout our marketing website and blog posts. Again, it’s worth mentioning that Google is very smart so it’s not a good idea to just throw a bunch of random keywords into every page – they need to be relevant to the page content.
Building Backlinks
I mentioned earlier that one of the key developments that Google brought to search was their PageRank algorithm that ranked web pages based on the quality of sites that linked to them. While PageRank itself has been superseded by new, improved algorithms, the basic idea still remains – leading web search engines favor sites that have a large number of external links from high quality sites. SEOs are well aware of this so one of their main focuses is link building, where they try to increase the number of high-quality backlinks (when an external website links to your site) to increase their page’s rank value.
Some link building takes place naturally, like when a site is linked from a news article, or with most of the links you’ll find throughout this site – I’ve simply added links that I think you will find useful, and as a result the linked website gains an additional backlink. No transaction has taken place. But building backlinks is also a manual task for SEOs, through activities like guest posting, where someone writes a blog post on another person/company’s site in return for a link to their own site, or simply reaching out and asking people to add a link to your website.
Most of the leading SEO tools use the concept of Domain Authority (DA) to provide each website with a score ranging from 0-99, based on how well the site should appear within search results (a site with DA of 50 would generally rank higher than a new site with DA of 5). The number of quality backlinks a site has pointing to it will be a significant component of that site’s DA score. When SEOs are building backlinks to their site they will usually try to get links from sites with high DA, as these links are generally “worth more” than a link from a site with low domain authority.
Meta Tags are small snippets of code within a web page that provide search engines with context about the page. Most of this information is not intended to be viewed by website visitors so the meta tags are generally found in the <head> section of the web page, rather than the <body> section that contains viewable content.
If you open a web page and click on the “view source” option in your browser then you should be able to see the meta tags somewhere towards the top of the HTML code.
Let’s quickly cover the main meta tags.
The title tag is possibly the most important meta tag. Not only does it provide our web browser with a page title that can be displayed in the window or tab, but the title will also show up as a clickable link in search results pages and when our page is shared on social media.
A strong title tag contains keywords that are relevant to the page, but is also written in a way that encourages users to click on the link if they are viewing the title from a search result or shared social post. Google’s search results page will display approximately 60 characters from a title so it’s important to keep them short, but also relevant and enticing to potential visitors – not an easy combination!
The meta description tag is also found within the <head> section of a webpage’s source code and is intended to provide additional context about the purpose of the page. The meta description text is displayed underneath the title on a SERP, so SEOs will often write the description in a way that complements the title to try and encourage users to click their link over the other links in the results page.
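Putting these two tags together, the `<head>` of a page for our hypothetical scratch repair business might look something like this (the business name and copy are invented for illustration):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <!-- Shown in the browser tab and as the clickable link on a SERP -->
    <title>Car Scratch Repair | Fast Mobile Panel Repairs</title>
    <!-- Shown underneath the title on the results page -->
    <meta name="description" content="Same-day car panel and scratch repairs. We come to you – get a free quote in minutes.">
  </head>
  <body>
    <!-- Visible page content goes here -->
  </body>
</html>
```

Note that the title here is well under the ~60 character limit, leads with the target keywords, and gives a searcher a reason to click.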
The heading tags are found within the page’s main <body> content and are used to define headings and subheadings. Each page should have one primary heading, the H1, and can (and for usability reasons, generally should) have multiple subheadings throughout the content. Google says that headings are not used as a ranking factor, but correctly organizing your page content around headings will make the page easier for search engines to understand, and provides page visitors with a better reading experience (no one likes to read long walls of text).
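As a sketch, a well-structured page body (the topic here is made up) might be organized like this:

```html
<body>
  <!-- One primary heading describing the whole page -->
  <h1>A Beginner's Guide to SEO</h1>

  <h2>What is SEO?</h2>
  <p>...</p>

  <h2>How does SEO work?</h2>
  <!-- Subheadings nest under their parent section -->
  <h3>Crawling and indexing</h3>
  <p>...</p>
</body>
```

The heading levels form a simple outline of the page, which is exactly what both search engine spiders and skim-reading humans want.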
Image ALT attributes
The image tag <img> that we use to display images on a web page also has an “alt” attribute where we can write a short text description of the image. This description lets us provide context about an image; for example, an image of a dog could have the alt attribute “Small brown dog”.
This context is important for two reasons. First, it has traditionally been difficult for search engine spiders to understand the content of images within a web page (although with the rapid developments in image classification machine learning models this is becoming less of an issue). Secondly, the alt attribute improves the usability of the site for people with impaired vision who may otherwise struggle to view an image – their web browsers and screen reader tools can simply read the descriptive text to them.
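In practice the attribute looks like this (the filenames are placeholders):

```html
<!-- The alt text is read aloud by screen readers and indexed by search engines -->
<img src="dog.jpg" alt="Small brown dog chewing a rope toy">

<!-- Purely decorative images can use an empty alt so screen readers skip them -->
<img src="divider.png" alt="">
```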
The goal of technical SEO is to optimize our websites and servers to meet the standards and requirements set down by search engines. If our site correctly follows these standards then search engine spiders can crawl our websites more effectively, and this should help improve our organic rankings. After all, we will struggle to get traffic if Google can’t accurately index our websites!
There are many components that make up technical SEO, so we’ll just focus on some of the key ones.
Fast and Mobile friendly pages (Page speed)
Google wants the websites their users view to load quickly and look good on mobile devices, so they punish poorly performing sites with worse rankings. Making a page load quickly may sound like a simple problem, but it can be quite challenging for developers to turn the websites and applications we know and love into highly optimized, blazing fast experiences.
Secure (SSL, HTTPS etc)
It’s important that the websites and web applications we use are secure, and any important data is encrypted to prevent anyone untoward from accessing it. Google understands this and has been pushing for all websites to actively take measures to become more secure. One way this is done is via Secure Sockets Layer, or SSL, a cryptographic protocol that secures network communications. You can see that a website is using SSL (or TLS, the updated, more secure version of SSL) if the URL uses the HTTPS protocol rather than HTTP (so https://secure.com is secure, while http://notsecure.com is not). A secure site will also display a small padlock icon to the left of the URL within your browser.
XML Sitemaps are special files that tell search engines about all of the pages on a website that are available to be crawled – basically a map of a single site (hence the name). Websites are not required to have a sitemap as Google’s spiders can generally work their way through and find any live pages, but they are still a useful way to ensure Google properly understands the page structure of a website.
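A minimal sitemap following the sitemaps.org protocol looks something like this (the URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2022-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/what-is-seo</loc>
    <lastmod>2022-01-10</lastmod>
  </url>
</urlset>
```

The sitemap is usually placed at the root of the site (e.g. /sitemap.xml) and can be submitted to Google through Google Search Console.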
Structured data are standardized data formats that provide information about a page so Google can correctly classify the page’s contents. You can see good examples of structured data when you do a web search for “cafe near me”, where Google will pull in structured data from cafe websites to display useful information, like each cafe’s opening and closing times.
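For example, a cafe could describe itself using schema.org’s JSON-LD format, embedded in the page’s HTML (all of the details below are invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "CafeOrCoffeeShop",
  "name": "Early Bird Espresso",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "12 Example Street",
    "addressLocality": "Springfield"
  },
  "openingHours": "Mo-Fr 06:30-15:00"
}
</script>
```

Because the data follows a standard vocabulary, Google doesn’t have to guess the cafe’s opening hours from free-form text – it can read them directly and display them on the SERP.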
After performing keyword research, building backlinks, creating content, implementing technical SEO changes, and many other tasks, an SEO professional needs to perform analysis to see whether their work is having the desired effect and increasing the site’s organic traffic.
There is a whole ecosystem of tools that have been developed to help SEOs analyze their performance, but some key ones include:
- Google Analytics: an analytics tool used by most websites that helps you track the number of visitors to a website (along with many other features). SEOs use GA to make sure that organic traffic is increasing over time.
- Google Search Console: A special tool that website managers use to provide Google with site specific information, like sitemaps.
- Tools like SEMrush or SE Ranking to track their site’s current rank for a list of target keywords.
- SEO tools like Ahrefs or Mangools for keyword research, site audits and many other tasks.
- Technical SEO tools like Screaming Frog to check for any issues with the site’s technical structure.
- Spreadsheet tools like Microsoft Excel or Google Sheets to create data dashboards and perform more in-depth analysis of exported data.
White and Black hats
You may hear some people referring to SEO work as “white hat” or “black hat”, terms that come from the world of computer hacking. White hat refers to generally acceptable SEO practices, while black hat practices aim to manipulate search results and are expressly forbidden by Google (if not outright illegal).
Examples of black hat tactics include activities like keyword stuffing, using hidden text (eg. white text on white backgrounds), or negative SEO, where unethical SEOs attempt to perform rule-breaking tactics on competing sites with the hope that Google will punish their competitor.
You should never use black hat tactics, however appealing they may be. Google hands out severe punishments to sites that break the rules, including de-indexing, where your site is effectively removed from search entirely. If an SEO company approaches you promising results that seem too good to be true, there is a chance they are willing to use black hat tactics. Take that as a red flag and stick safely within the rules.
How can I learn more about SEO?
If you have a website or blog you’d like to optimize or are simply interested in learning more about SEO then make sure to check out the resources below.
SEO News and Blogs
Directly from Google
Useful SEO tools