Unless you live on another planet, you use Google every day. You know the drill: you open the app, type what you're looking for into the search bar and, voilà!, thousands or millions of results to click on. Those results aren't ordered the way they are by chance, magic, or divine intervention: they're there thanks to the SEO, or organic positioning, work behind them.
If that acronym sounds like gibberish to you, this article will explain what SEO or organic positioning is in the simplest way possible.
What is SEO?
SEO is an acronym that stands for search engine optimization. This short but technical definition may leave you just as puzzled as before, so let's give a longer but clearer one:
SEO refers to all the actions carried out both on and off a website to ensure it appears in the best possible position in free search engine results. These results include both those that appear below the ads and those that appear on Google Maps with specific business locations (local SEO).
Better? Be patient, we'll talk about those "actions" later.
What's important is to differentiate from the start between these free results (SEO) and paid results (PPC), which correspond to advertising campaigns run on Google. In short, SEO is like opening a store on the busiest avenue in the city, while PPC is paying for a banner advertising that store right at the entrance to the avenue.
With that definition in mind, the next step is to explain the basic concepts that SEO experts use every day, terms we'll use throughout the article.
SEO Basics
Search engines
We mentioned them in the technical definition of SEO. A search engine is a computer system that collects and stores information on web pages. So, when you search for something in the engine, it will display the results that best match your search.
The most widely used in Spain and Latin America is Google, but there are others, such as Bing, Yandex, Yahoo, and Baidu.
URL or “web page”
In everyday speech, "website" is used to refer to both a specific page and an entire domain, and usually it doesn't matter much. But when it does, what we actually have is a domain made up of different URLs. For example, www.domain.com is made up of URLs like www.domain.com/url-1, www.domain.com/url-2, etc.
Imagine a book: the book would be the domain, and each page would be a URL. To avoid acronyms, we'll use URL and page interchangeably in this post (but we will use domain).
Crawlers, spiders or bots
Crawlers, also known as spiders or bots, are computer programs that access and read the information contained in URLs, and do so to classify that information into an index that will then be displayed to the user.
Think of a bookstore owner who reads every book at supersonic speed to find out what it's about and then arranges them on the shelves according to what's most searched for or purchased.
Sitemap and robots
A sitemap is, literally, a map of your domain. It's a file that helps crawlers by showing them an overview of the domain's most important pages (URLs). Having a sitemap isn't mandatory, but it's highly recommended for domains with a large number of URLs. Think of a tourist map: it includes only the most interesting places, not the ones you can skip.
The robots.txt file, on the other hand, is used to block crawlers from accessing certain URLs. What do you do when you have guests at home and don't want them to enter a room? Lock the door, right? Well, this is the same thing: lock the URLs you don't want bots to see.
Indexing and link tracking
These concepts are closely related to crawlers.
Indexing refers to a URL appearing on a search engine results page. For this to be possible, the crawler must have access to the content on that page and be able to make it public. For example, you give the guest permission to enter the room AND you also give them permission to tell everyone what they saw inside (which shouldn't be a bad thing...).
It's also possible to let the crawler in but not let it tell anyone what it saw; that is, the crawler scans the content but doesn't index it or show it in the results. This case, although less common, is sometimes necessary, especially for URLs with legal/administrative content: cookies, privacy, shipping, returns, etc.
Now, a link is a door that connects two rooms, two pages. We can prevent the crawler from even detecting that door, let it open the door, pass through and leave it open, or let it pass through and close the door behind it.
Crawl budget
Like humans, crawlers have little patience, and they won't always have time to visit every room in the house, even with our permission. The crawl budget is the time a crawler is willing to spend scanning a site's content, so we need to make things easy for it and remove any obstacles it might encounter along the way.
Loading time
In organic search engine optimization, being the fastest doesn't always guarantee you'll reach the top spot, but it's one of a domain's best qualities (among many others). In Google's case, it's recommended that the domain should take no more than 2-3 seconds to display the first content to the user.
Keywords and search intent
Keywords are the terms the user types into the search bar. They can be a single word or an entire sentence, subject and predicate included (very common for blog posts, such as "What is SEO and how does it work").
For example, we have a used car dealership that we want to rank on Google. We'll need to target the keyword "used cars" by placing it in very specific places on our homepage or catalog: in the title, in the description, in the product names, etc.
Search intent means offering the most targeted content possible based on the user's query. Going back to the previous keyword, someone searching for "used cars" wants to buy a used car. They won't be interested in results that are far from that intent, such as a post about who built the first car or any other content that doesn't satisfy their search.
Keyword research
Keyword research consists of determining the most interesting keywords for the niche in which we operate.
Each keyword has a monthly search volume and difficulty level, information that Google positioning agencies obtain using specific software tools. The difficulty level is not necessarily proportional to the search volume, but the more difficult the keyword, the more SEO efforts will be required.
If we had a used car dealership, the keywords we'd be interested in would be "used cars," "cheap cars," "car sales," etc., but never keywords that include "motorcycle."
Web architecture
Can you imagine walking into a supermarket and finding products scattered all over the place? Dairy products mixed with fruit, meat mixed with fish, cereals mixed with beauty products... Chaos.
The same thing happens with a domain: content should be organized according to a logical hierarchical structure, starting with a top menu and going down to the pages we want to index. Ideally, there should be no more than three clicks from the moment we enter the supermarket until we check out.
Remember: Mixing leads to clutter, clutter leads to rejection, and rejection leads to the dark side of Google, beyond page 10 of results.
Duplicate content and “thin content”
Just as it doesn't like clutter, Google doesn't like duplicate content either. Note that duplicate content refers to both identical and similar content.
Thin content is simply content with no value for the user. In other words, empty pages with little text, bordering on useless. If you don't find them useful, Google won't either.
Keyword cannibalization
We talk about cannibalization when two or more URLs compete for the same keyword with similar content. Cannibalization leaves search engines unsure which page to display, so the URLs end up competing against each other and splitting their visibility across different positions.
Domain Authority
Domain Authority (DA) is one of the most important metrics for organic ranking. It's a value from 0 to 100 that measures the popularity, relevance, or importance of a domain in Google's eyes.
In real life, the most popular, relevant, or important person is the one with the most connections. The same thing happens in SEO: domain authority is gained through links from other domains that link to ours.
Google doesn't publish domain authority, but it can be estimated using SEO tools like SemRush or Ahrefs.
On a scale from 0 to 100, how much domain authority would you say Wikipedia has?
On-page SEO and Off-page SEO
You may remember that at the beginning of the post we said that SEO encompasses actions performed both on and off the domain.
Well, the actions performed within it make up On-page SEO: creating the sitemap and robots files, organizing the information hierarchically, inserting keywords, and a long list of other tasks, both technical and content-related.
On the other side of the coin is off-page SEO, which refers to actions performed outside of our website, such as, specifically, placing links on other domains to increase our website's organic reputation (DA).
Google Search Console
Google Search Console (GSC) is Google's official tool for checking (almost) all the SEO information related to a domain. Why do we say "almost"? Let's be honest:
Google is still a company, and as such, it wants to make money. However, SEO results are free; Google doesn't earn any revenue from them. How many companies explain everything when it comes to a service they offer for free?
That said, in Google Search Console we can see data on page performance, experience and usability, errors and warnings, links, etc. Regarding performance, it's important to understand the following metrics:
- Impressions: The number of times users have seen a URL on the Google results page.
- Clicks: Number of times users have clicked on our URL.
- CTR: Percentage of clicks relative to the number of impressions. For example, 50 clicks on 1,000 impressions is a CTR of 5%.
- Average Position: The average position of the URL on the results page.
However, a professional SEO agency also works with SEO tools outside of Google to view information that Google Search Console doesn't offer, such as keyword search volume and difficulty, position changes on the results page, estimated DA, etc.
We've mentioned SemRush and Ahrefs before, but there are others like DinoRANK, Sistrix, and Übersuggest. Not all of them are suitable for everything, so you have to choose the most appropriate one for the SEO strategy you want to develop.
How does organic positioning work?
Algorithm
Coca-Cola will never make the formula for its recipe public, nor will Google make public how its algorithm works.
This secrecy is logical, because if everyone knew exactly how the algorithm works, there would be no competition: every page would aim for the top position (which is physically impossible, of course). It would be as if we could all make Coca-Cola at home, for free.
Like other digital channels, Google SEO works with a constantly changing algorithm, from small, inconsequential updates to massive ones capable of sinking a domain into oblivion (it has happened, it happens, and it will continue to happen).
Most important factors
Through trial and error, SEO professionals have managed to identify over the years some (though by no means all) of the factors that influence a page's ranking.
The ones that must be kept in mind as if they were the 10 commandments of SEO are:
- Domain Authority: A metric that measures the popularity of a website, gained through links from other domains.
- Website security (SSL protocol): Data encryption protocol that ensures a secure connection between the user's computer and the server.
- Loading speed: The time it takes from the moment the user's computer requests content from the server until it is displayed in their browser. The maximum recommended time is 3 seconds.
- CTR: Percentage of clicks based on the number of impressions. If a page is in position 1 but no one clicks on it, Google will demote it because it considers it uninteresting.
- Mobile usability: content adapted to the screen size of mobile devices (readability, spacing, etc.).
- On-page user experience: user interaction with the page content. Videos often work better than images.
- Valuable content: relevant, useful, and in-depth information for the user based on their search intent.
- Internal structure of the page: hierarchical ordering of the content within the same web page (HTML).
- Page dwell time: The time a user spends viewing the content of a URL. The higher the value of the content, the longer the dwell time.
- Engagement rate: Percentage of users who, from one page of the domain, have reached another through a link.
It's believed that Google's algorithm takes into account up to 200 factors when ranking a page. And while no one knows them all, Google provides "clues" about them, so you should always be informed about what they're doing in Mountain View to analyze and correct the impact that algorithmic changes have on your results.
User query and results
Aside from algorithms and factors, SEO works as follows.
When a search engine spider accesses a public URL, it crawls it, detects the keywords, makes a copy of the content, and stores it in the index. It not only detects the content of a single page, but, thanks to the links, the spider jumps from one page to another, crawling them all and gaining an idea of the domain's content.
If the content in question is indexable, Google will rank it in a specific position for each keyword detected. Depending on the level of optimization for a keyword, the page will rank higher or lower in the SERP (the page where Google search results appear). Let's take an example:
We have a page where we discuss the benefits of compression stockings, focusing on the keyword "benefits of compression stockings." In this post, we'll discuss what compression stockings are, how many types there are, what they're used for, how to put them on, and how to care for them.
These stockings are garments worn to improve circulation in the legs, to prevent swelling, itching, the appearance of varicose veins, and some conditions that impair blood circulation. Therefore, it's to be expected that terms such as "swollen legs," "swollen ankles," "blood circulation," "muscle fatigue," "varicose veins," "diseases that affect circulation," etc., will naturally appear in our article.
Our main keyword is "benefits of compression stockings," and if we have good content explaining the benefits of using them, we'll appear in first position when someone searches for it. But we'll also occupy other positions (higher, equal, or lower) for secondary keywords worked indirectly, since they are semantically related to the main keyword.
How to do good SEO
Before you buckle down and "fight" the algorithm, you need to know that organic positioning is a long-term endeavor. It's not about publishing a post on social media, clicking the button, and there it is, first in your feed. There are no magic buttons in SEO. It's not a sprint, it's a long-distance race.
To reach the finish line, an SEO strategy is divided into three pillars: technical actions, content actions, and link building actions. Because they require different knowledge and skills, it's common for an SEO agency to have three departments, one for each type of action. In fact, it's the optimal way to work.
What begins well ends well, so the first step is to conduct an SEO audit to determine the current state of the website we're working on. This will allow us to establish a roadmap with all the actions to be performed, whether technical, content, link-based, or all of the above.
Once you've identified SEO errors and opportunities through the audit, the next step is to determine which keywords you want to rank for on Google. If the website already existed, it's likely to rank for some keywords, but you'll need to check whether they're the correct ones (search intent).
For example, if it's a cosmetics store, you'll need to work on keywords that refer to products, such as "creams," "exfoliants," "sunscreens," "shower gels," "conditioners," and other products that you'd find in a physical cosmetics store.
However, if you're a digital marketing agency looking to work on your blog, you'll focus on keywords like "what is SEO," "types of social media," "inbound marketing," "how to create a blog," "what is a CMS"… topics that will help you position yourself as a leader in the industry.
If we know that web positioning is long-term, we know where we're starting from, and we know where we're going... Now the fun begins! 🙂
Technical SEO
Technical SEO, which is part of On-page SEO, encompasses all actions aimed at improving URL crawling and indexing. Although these actions are generally invisible to the user, crawlers do detect them, hence their importance.
Ensure good loading time
Several factors influence a website's loading speed: the user's device and connection speed (although these matter less and less these days), the size of the files that make up the website, and the location of the server and hosting.
We need to make sure our website loads within 3 seconds at most, as this is one of the main factors affecting SEO. We can check speed with tools like GTmetrix, WebPageTest, and PageSpeed Insights.
The actions that will allow us to improve loading speed are:
- Consider migrating to a hosting plan that offers better performance.
- Use a content delivery network (CDN) so that the data needed to load a site is hosted on multiple servers. CDNs choose the server closest to the user to increase the speed of content delivery, rather than having everything hosted on a single server.
- Reduce image size to keep them under 100 kb. One of the most common mistakes is uploading images without first compressing them, often resulting in images that weigh up to several megabytes. There are many free online tools that reduce image size without compromising quality, and they work with the most common formats, such as JPG and PNG.
- Remove links to redirected pages. A redirect is a gateway (link) that connects a URL that no longer exists to another that does. What affects loading speed and crawl budget is when there are gateways leading to the URL that no longer exists, as the crawler has to make the "jump" to the valid URL. It's necessary to remove all links to URLs that no longer exist and replace them with the correct pages.
- Prioritize caching and asynchronous loading. Caching temporarily stores data, a preloaded version of the website that is delivered to the user much faster than the standard version. Asynchronous loading, on the other hand, allows elements such as text to load first, leaving CSS and JavaScript elements that take longer for last (see the snippet after this list).
- Minify code. Sometimes HTML code (the language in which websites are built) includes too much whitespace or line breaks. This only slows down the crawler, which must continue to move forward until it finds the next written part of the code.
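To illustrate the last two points, here's a minimal sketch of what lazy images and asynchronous loading look like in HTML (the file names are invented for the example):
<!-- The image is only downloaded when the user scrolls near it -->
<img src="/images/product.jpg" alt="Product photo" width="600" height="400" loading="lazy">
<!-- "defer" and "async" let the text render before these scripts finish loading -->
<script src="/js/analytics.js" defer></script>
<script src="/js/chat-widget.js" async></script>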
Most websites are built with CMSs, content management systems that allow you to create a website without technical knowledge. There are predefined ones, such as WordPress, Shopify, or Prestashop, as well as custom-made CMSs.
Depending on the CMS we use, many of the actions to improve loading speed can be done through plugins. But be careful not to overload your CMS with plugins, as too many also affect loading speed. That plugin installed by your friend who dabbled in IT and that you no longer use? You're about to delete it.
Create the sitemap.xml and robots.txt files
Sitemap.xml
The sitemap.xml is the file that contains the domain's main URLs. This file's purpose is to facilitate website crawling, with the resulting SEO benefits. Although creating one is not mandatory, it's highly recommended for domains with a large number of URLs, such as online stores.
There are several ways to create this file: using a plugin that allows you to generate it (depending on the CMS), using a free or paid tool, or doing it manually using a text editor (the most laborious and least recommended option).
The sitemap must have an XML extension and be hosted in the domain root: www.domain.com/sitemap.xml. Once created, its path must be entered in Google Search Console.
It's important not to confuse sitemap.xml and sitemap.html. The former is specifically for search engine crawlers, while the latter, although no longer used, is a map that shows the user the domain's most important URLs.
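To give you an idea, a minimal sitemap.xml for our example domain could look like this (the URLs and dates are invented for illustration):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.domain.com/</loc>
    <lastmod>2024-05-10</lastmod>
  </url>
  <url>
    <loc>https://www.domain.com/blog/what-is-seo</loc>
    <lastmod>2024-04-28</lastmod>
  </url>
</urlset>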
Robots.txt
The robots.txt file is used to tell crawlers which URLs they can and can't visit. In fact, it's the first file the bot will look for when it enters the domain.
This file is used to optimize crawl budget, prevent access to duplicate and private pages, and hide resources that aren't SEO-friendly, such as admin files, PDFs, etc. If we don't have this file, any crawler can access any URL on the domain (every room in our house).
The robots.txt has a very simple syntax, consisting of the crawler name (User-agent), a rule (Allow/Disallow) and the lines with the URLs allowed or blocked for that crawler.
If we prevent crawling of a URL, the bot won't be able to access any URLs that depend on it. Imagine three train cars: ABC. If we uncouple car B, the bot won't be able to access car C unless we specifically tell it to.
Let's look at some examples of robots.txt:
Block all crawlers from accessing the entire website
User-agent: *
Disallow: /
In these lines, the (*) indicates that the instruction applies to all crawlers. The (Disallow: /) indicates that we're blocking all URLs on the website. When do we want to prevent crawling of the entire website? For example, when we're making major changes to it, although this isn't the best practice for SEO.
Block access for a single bot
User-agent: *
Allow: /
User-agent: Googlebot
Disallow: /
We indicate that all bots (*) can crawl the entire website (Allow: /), but we specify that Google's bot (Googlebot) does not have permission. When should we prevent Google from crawling our website? Well, as a Google positioning agency, we don't recommend it, since Google is the most used search engine in both Spain and Latin America.
Block access to specific pages
User-agent: *
Disallow: /page-1
Disallow: /page-2
Disallow: /page-3
In this robots.txt file, we've blocked all bots from accessing pages 1, 2, and 3 of our website. They'll be able to access the rest without any problems.
Block access to a page, but allow access to pages dependent on it
User-agent: Googlebot
Disallow: /blog
Allow: /blog/what-is-seo
Allow: /blog/how-to-do-seo
In this example, we prevent Googlebot from accessing our blog page and, therefore, all the posts that hang from that URL. However, it does keep permission to access two specific posts, and only those two: "What is SEO?" and "How to do SEO."
How do you know which pages to block in robots.txt? Cart pages, thank you pages, administrator or user resource pages, dynamic pages, and search engine pages are typical URLs that should be blocked with this file.
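As an illustration, and assuming hypothetical paths that will vary depending on your CMS, a robots.txt covering those typical cases could look like this:
User-agent: *
Disallow: /cart
Disallow: /checkout
Disallow: /thank-you
Disallow: /search
Disallow: /wp-admin/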
Like the sitemap.xml, there are plugins for creating the robots.txt file, although it's relatively easy to do so in a simple plain text editor. CMSs like WordPress and Shopify create a fairly comprehensive robots.txt file by default, although it never hurts to have a technical SEO expert review it.
Deindex URLs of legal pages
Robots.txt, as we've seen, prevents bots from crawling specific pages. There's also an HTML tag, the meta robots tag, that is used to prevent crawled pages from being indexed, typically pages with legal content: cookies, shipping, returns, etc.
These types of pages should be crawled, but they shouldn't be displayed in search results, as they aren't interesting to the user. Thanks to meta robots, we indicate that the search engine shouldn't include them in its index.
This tag has two possible instructions: "index," meaning the URL should be indexed (the default), and "noindex," meaning the URL should not be indexed even if it's crawled. It can be inserted manually into the page's source code, but there are also SEO plugins that let you add it in just a few clicks.
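As a reference, this is what the tag would look like in the source code of, say, a cookies policy page (a sketch):
<meta name="robots" content="noindex">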
Link tracking
Remember when we talked about domain authority? That kind of "SEO strength" a URL has? Okay, now think of the domain (the homepage) as a glass, and domain authority as the liquid inside.
Thanks to links, the homepage's glass pours some of the liquid into another URL's glass, thus passing on some of its authority. In turn, the second URL will pass some of the liquid to another URL to which there is a link, and so on until the liquid is evenly distributed among all the glasses worth filling.
By default, a link will pass authority, but we can prevent this using an HTML attribute: "nofollow." This attribute can apply to all the links on a URL, in which case it's inserted into the meta robots tag. However, if we want only a specific link to withhold authority from the URL it points to, the "nofollow" attribute is inserted into the HTML code of that particular link.
Do not follow any link in the URL
Here, we need to insert the "nofollow" attribute into the URL's meta robots tag. Taking into account the "index" and "noindex" instructions above, there are four possible combinations:
- “index, follow”: the URL is indexed and its links transmit authority.
- “index, nofollow”: the URL is indexed, but its links do not transmit authority.
- “noindex, follow”: the URL is not indexed, but its links transmit authority.
- “noindex, nofollow”: the URL is not indexed and its links do not transmit authority.
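Expressed as meta robots tags in a page's source code, those four combinations look like this (only one of them would be used per page):
<meta name="robots" content="index, follow">
<meta name="robots" content="index, nofollow">
<meta name="robots" content="noindex, follow">
<meta name="robots" content="noindex, nofollow">
In practice, "index, follow" is the default behavior, so that first tag is usually omitted.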
Do not follow a specific link in the URL
The meta robots will indicate that the URL is “follow,” and we will add the “nofollow” to the link in question as follows:
“As we explained in our post about <a href="www.domain.com/blog/what-is-seo" rel="nofollow">what is SEO</a>…”
Thus, our post on “What is SEO” will not receive the authority of the page on which the link to it is inserted.
404 errors and 301 redirects
What do you feel when you click on a Google result and the page you land on doesn't exist? A frustration that makes you close it immediately. A page that doesn't exist (a 404 error, in technical jargon) is detrimental to the user experience and, therefore, to SEO.
To find out which URLs on your domain are giving this type of error, you should consult Google Search Console, where a list of them will appear, and also use crawling tools that detect 404 errors on your website. When you detect them, you should redirect the 404 page to a correct URL and remove all links leading to it.
In essence, a 301 redirect occurs when we change one URL to another. For example, www.domain.com/blog/what-is-seo -> www.domain.com/what-is-seo, where we removed "blog" from the original URL. It's very important to pay attention to the URL name, as two URLs may look identical to the human eye but not to crawlers. The slightest change in the URL (a letter, a slash, etc.) turns it into a completely different URL, with all the implications that entails.
For SEO purposes, it's about removing links that point to a 301 page—that is, to a page that redirects to another page—and placing them toward the correct URL, which is the one the 301 redirects to. The goal of this is to optimize crawl budget by eliminating the "jump" the crawler must make from the redirected URL to the final URL.
Mirror URLs and canonical URLs
We repeat: the SEO professional's eye knows how to read URLs carefully. We emphasize this when discussing the next aspect of technical SEO: mirror URLs and canonical URLs.
Mirror URLs are URLs that appear identical but are actually different. We're referring to small variations such as slashes (/) or even "www." For example:
www.domain.com/blog/what-is-seo
www.domain.com/blog/what-is-seo/
or
www.domain.com/blog/what-is-seo
domain.com/blog/what-is-seo
Mirror URLs, like the ones in these examples, are a problem for SEO because they create duplicate content: two different URLs with identical content. There are three ways to correct them:
- Delete one of them and redirect it to the correct one (the least recommended).
- Apply a “noindex” to the incorrect URL.
- Use canonical URLs.
Canonical URLs tell crawlers which URL is the correct one, the URL they should follow and, therefore, index. This is done with an HTML tag inserted in the header of the incorrect URL's source code. It must also be included in the correct URL itself (a self-referencing canonical).
In the example above:
If www.domain.com/blog/what-is-seo is the wrong URL, we need to tell the crawler that the correct one is www.domain.com/blog/what-is-seo/ by inserting the following tag in the header of the first one: <link rel="canonical" href="www.domain.com/blog/what-is-seo/">. This way, we're telling the crawler to ignore the URL without the trailing slash and only pay attention to the URL with it.
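And in the header of the correct URL (the one with the trailing slash), the self-referencing canonical we mentioned simply points to its own address:
<link rel="canonical" href="www.domain.com/blog/what-is-seo/">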
The good news is that CMSs have been avoiding mirror URLs by default and inserting canonicals automatically for years now. But, once again, it never hurts to have a technical SEO expert review this configuration.
“Mobile first” and usability
Just as CMSs often avoid the above error, they also adapt content to mobile formats. The days of having one version of the website for desktop and another for mobile are long gone.
However, depending on the CMS, the mobile version may have less readability, content that's wider than the screen, little spacing between buttons that prevents clicks, and other formatting issues that technical SEOs must address.
Remember that Google prioritizes the mobile version, so this format must be flawless and highly usable; usability is one of the factors that most influences SEO.
Content SEO
If you've made it this far, let's move on to the more fun part: content SEO.
This part of SEO, which is also included in on-page organic positioning, focuses on the content that appears in each URL: content designed to attract the users who are searching in search engines.
SEO content specialists aren't specialized in the technical side, although they do have the basic knowledge to perform certain tasks that straddle the technical and content aspects.
Keyword research
Keyword research is one of the pillars of content SEO (and SEO in general). As we mentioned at the beginning of this (extensive) article, it involves identifying the keywords for which we want to rank the domain, taking into account the niche, monthly search volume, difficulty, sales funnel depth, and (this is often forgotten) the amount of time we have available.
To identify interesting keywords and gain insights, it's best to use tools like SemRush. On the one hand, this tool tells you who the organic competitors of the domain you want to rank are, as well as the keywords they're working on. On the other hand, it gives you very useful keyword suggestions based on your initial search, among many other features and data (and no, we're not on commission).
For example, if you have an online women's footwear store and need to work on ecommerce SEO, your top keywords would be "women's shoes," "stilettos," "sandals," "wedges," "boots," "ballerina flats," and other types of women's footwear. Each of these keywords should be targeted on different URLs. Initially, "women's shoes" should be on the homepage, and the rest should be on collection or catalog pages.
It doesn't end there, because depending on the products, we may need to assign a keyword to each one, such as "red stilettos," "gold wedges," "black boots," "leopard-print ballet flats," depending on the characteristics that match the keyword (search intent).
Once we've identified the keywords we want to work on, we move on to the next phase: information architecture.
Information architecture
Information architecture is the way we present web content. Having a good architecture makes navigation easier for users and crawlers, who will access the different URLs through the links between them.
To implement an information architecture, or web architecture, you must follow a logical hierarchy, where general content is broken down into levels leading to specific content. Each piece of content lives on a specific page and is focused on one of the keywords we selected in the previous phase. And we do mean one keyword (along with its synonyms): it's not optimal to target two or more keywords on the same page, as this would confuse both Google and the user.
Picture the home page as a dot in the center of a diagram. Several links lead from it to collection pages (Sandals, Stilettos, Wedges, Boots, Ballerinas, etc.), and each collection in turn links to the products it contains.
From the home page, there are also links to other URLs that aren't collections or products. We're talking about the typical Contact page and the Blog page, from which the links to each post originate.
The optimal path for good organic ranking is to have no more than three clicks from the home page to reach a product or blog post. This is what SEO experts call "depth," so you have to make it easy for both the user and the crawler to find their footing and not get bogged down in a very deep and incomprehensible structure.
Assigning keywords to each URL
Knowing the keywords we're going to target and the content structure, we now need to place each keyword in its specific URL. But where exactly should we include it? Mainly in the following places (see the example after this list):
- URL: The URL itself must contain the keyword. This also makes its syntax user-friendly, meaning it doesn't include things like "www.domain.com/UJFkid_8l2k." If it's the sandals page, for example, the friendly URL would be "www.domain.com/womens-sandals."
- SEO Title: The SEO title is the first sentence that appears on the search engine results page. In Google's case, it should be no more than 55 characters, although the available space is actually measured in pixels: an "i" will take up less space than an "m," even though both are one character.
- Meta description: This is the description that appears below the SEO title. In this case, it shouldn't exceed 155 characters, but again, we need to consider pixels more when determining the ideal length. It's not a factor in organic ranking, but it does encourage users to click.
- Menus: Menu buttons should include the main keyword, or at least part of it. For example, if we clicked on our “Shop” button in the top menu, items named “Stilettos,” “Sandals,” and “Flip Flops” would appear, but without “Women’s.”
- H1 Heading: The H1 is the page title, the heading the user sees once they enter the page. Ideally, it should be under 60 characters and shouldn't be exactly the same as the SEO title.
- Page text: The keyword should also appear in the page text. "Text" includes more than just paragraphs; it also includes, for example, product names. There is no minimum or maximum number of times a keyword should appear in the text; the best advice is to use it naturally, without forcing it in. Excessive use of this word is detrimental to a page's ranking.
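Putting several of these pieces together, the key parts of our hypothetical women's sandals page might look like this (a sketch; the texts, URL, and store name are invented for the example):
<!-- URL: www.domain.com/womens-sandals -->
<head>
  <title>Women's Sandals | Online Shoe Store</title>
  <meta name="description" content="Discover our collection of women's sandals: flat, heeled and platform models for every occasion.">
</head>
<body>
  <h1>Women's sandals</h1>
  <p>Browse our women's sandals and find the perfect pair for this summer…</p>
</body>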
Heading structure
The H1 isn't the only heading that should appear on the page. There are other headings, or subheadings—up to six—that serve to structure the page's content.
Thus, the page will have an H1 title that identifies its content, the main keyword. The content will be divided into H2 headings with synonyms of the main keyword, and these will contain H3 headings with concepts semantically related to it. Google pays little attention to H4 headings, let alone H5 and H6. However, if you need them to further structure the content, that's fine.
For example, the heading structure of this same post is:
(H1) SEO: definition, operation and how to apply it
(H2) What is SEO?
(H2) SEO Basics
(H3) Search engines
(H3) URL or “web page”
…
(H2) How does SEO work?
(H3) Algorithm
(H3) Most important factors
(H3) User query and results
(H2) How to do good SEO
…
Thanks to Hx headings, page content is divided into thematic sections (from generic to specific), making it easier for Google and users to understand. Without Hx headings, we would end up with a block of text where the information would appear all at once, with no clear understanding of where to move from one subtopic to another.
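In the page's source code, that hierarchy is expressed with plain HTML heading tags, something like this (indentation added only for readability):
<h1>SEO: definition, operation and how to apply it</h1>
  <h2>What is SEO?</h2>
  <h2>SEO Basics</h2>
    <h3>Search engines</h3>
    <h3>URL or "web page"</h3>
  <h2>How does SEO work?</h2>
    <h3>Algorithm</h3>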
Imagine if we had presented you with the 6,000 words we've written so far without any headings. Would you read such a long text, or would you just close the window? There's your answer.
SEO link building
Link building, as we mentioned in the previous section, is part of off-page SEO and involves getting other websites to link to yours, which translates into a transfer of authority and relevance. If the website were a physical restaurant and each external link were a recommendation from a food critic, the restaurant's popularity would grow, right?
But not all links are created equal. A link from a site with high domain authority (DA), such as a well-known newspaper, is worth much more than one from a little-known blog. It's also important that these links are natural and relevant, meaning they should come from sites related to your topic. Obtaining links artificially or from low-quality sites is penalized by Google, so you need to focus on quality over quantity.
As for the most widespread and accepted link building techniques, we can talk about the following:
Creating quality content
When you like a piece of website content, are you one of those who tend to share it? If so, you were link building without even knowing it! The content that generates the most natural links is informative, useful, and evergreen—content that remains relevant no matter how much time passes (for example, the biography of a famous person).
Collaborations with bloggers
Another good link building technique is to enlist the support of bloggers in your niche. If your website content provides value to their audience, they might link to it from their own sites.
This collaboration may be free (the most recommended option), but sometimes bloggers ask for compensation, in which case the link is no longer a natural one.
Guest posts
Also known as guest blogging, this involves writing a post on a website other than your own (hence the term "guest") and placing a link back to your site in it. This way, if the DA of the website that "invites" you is high, which is ideal, it will pass authority to the page on your domain that you've linked.
This technique also serves to reach a new audience, that is, to generate visibility through an external website relevant to your niche.
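For example, a guest post published on another blog might naturally include a followed link back to your domain, something like this (hypothetical URLs):
<p>For a deeper dive into the basics, see <a href="https://www.domain.com/blog/what-is-seo">this guide to what SEO is</a>.</p>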
Advantages and disadvantages of SEO
Like everything in life, organic positioning also gives you good and bad, so now we'll talk about the advantages and disadvantages of SEO.
Advantages of SEO
- Long-term organic traffic: Once your website is well-ranked, you'll receive a steady stream of traffic without having to pay directly for it, as is the case with PPC.
- Credibility and trust: Users tend to trust organic results more than paid ads.
- Better return on investment (ROI): Although SEO requires time and effort, it typically offers a higher ROI over the long term compared to other forms of digital marketing.
Disadvantages of SEO
- Long-term work: SEO doesn't offer immediate results. It can take weeks or months to see the first conclusive results, such as an increase in the number of keywords, improved rankings, etc.
- Constant changes in algorithms: Like all digital marketing channels, SEO relies on algorithms that constantly change and can cause you to drop in rankings in just a day. Therefore, you must always stay up-to-date to avoid these situations.
Black Hat SEO: What You Should Never Do
The first disadvantage we mentioned earlier, long-term work, leads some SEO professionals or agencies to do what should never be done: trick Google or, in technical terms, engage in black hat SEO. But what is black hat SEO?
Basically, black hat SEO refers to any practice carried out to achieve better rankings in an unethical manner; that is, actions that attempt to trick the algorithms in order to climb the rankings quickly.
However, black hat SEO is a short-term gain and a long-term loss: the algorithms will eventually discover it and penalize the website, whether a specific URL or the entire domain.
Black hat penalties, in fact, can go beyond a drop in rankings: they can even lead to the removal of the entire domain from the results pages.
We're not going to leave you hanging: as SEO experts, we know exactly what these bad practices are (and we NEVER apply them). Keyword stuffing, hidden content, cloaking (showing one version of a page to crawlers and another to users), parasite pages, doorway pages, and buying backlinks (external links that point to your domain) are the best-known black hat SEO techniques.
Now you know: if any SEO expert suggests using any of these techniques, run away and don't look back. In the long run, you'll win. A lot.
Differences between organic and paid positioning
Now that we're (finally) reaching the end of this article, let's go back to what we said at the beginning.
SEO, as you know by now, refers to actions aimed at appearing in organic search results, meaning free results. PPC, on the other hand, refers to paid search engine positioning, meaning positions achieved by paying the search engine.
Is PPC or SEO better for your business?
After nearly 10 years running our digital marketing agency, we're clear that PPC and SEO can and should go hand in hand. A strategy based solely on one of them isn't optimal, as PPC and SEO feed off each other:
- The PPC team doesn't typically deal with a website's technical errors—errors that the user doesn't see, but search engine spiders do. The technical SEO team resolves them, and this benefits both your paid and organic results.
- With PPC and SEO, you'll increase your visibility, as you'll occupy more than one position on the results page: users will see your paid result first, followed by your organic result below.
- Some users never click on an ad, but those same users always click on organic results…
The work of SEO agencies
Expert SEO agencies offer all the services you need to position your website on Google and other search engines, such as keyword research, on-page and off-page optimization, and link building strategies, all based on the SEO budget you set. However, many agencies only provide recommendations; in other words, they don't implement the actions they propose, leaving clients alone with all the questions they may have...
At Maktagg, we work differently. We don't just develop these strategies; we also implement them.
Do you have a website and people only find it by searching for your brand?
Do you want to gain visibility without relying on advertising?
Looking for an agency that guarantees results on Google without resorting to black hat techniques?
Tell us how we can help you!