Welcome to the comprehensive SEO guide!
This guide is your starting point for ranking higher in search results. Sites get traffic in two main ways:
The first is marketing, whether free through social networks or paid, and the second is search engine results.
The first method may seem faster and more efficient in the short term, but advertising campaigns always end eventually, no matter how long they run.
Traffic coming from search engine results, on the other hand, does not stop as long as you keep writing valuable content and following the SEO rules.
Before you continue, I suggest you bookmark this SEO guide so you can easily refer back to it at any time.
Let’s start by defining SEO.
SEO is an acronym for Search Engine Optimization, and it means optimizing websites for the way search engines work.
We can also define SEO as the set of skills and processes applied in order to rank in the first results of a search engine.
But what is the importance of reaching the first result?
Let me ask you: What do you do when you want to know something?
Of course, you open the browser on your mobile device and type in the search box some words that indicate what you want to know.
The first reason search engines matter is that virtually all Internet users rely on them to get what they want.
And getting your site to the first search results increases your chances of getting a massive amount of organic traffic.
Not only that, but your site appearing in the search results for certain keywords means the visitor is actively looking for the information or service you provide.
This way, you get traffic from a target audience that is already looking for you.
We can summarize the importance of SEO in the following points: it brings traffic that lasts instead of stopping when a campaign ends, it can deliver a massive amount of organic traffic, and it brings visitors who are already searching for what you offer.
The SEO field is constantly changing with search engine updates, but the fundamentals are constant and as long as you follow a white hat SEO strategy, you won’t have to worry too much about those updates.
White hat refers to the set of SEO techniques and procedures that are primarily intended to provide valuable content to the visitor.
Black hat, by contrast, refers to SEO techniques that put the search engine first and include tricks to deceive it in order to reach the top results quickly.
The black hat strategy does produce quick results, but it puts the site at risk of being banned or removed from the search engine index, and it may even expose the site to legal action.
We recommend a white hat strategy, but if you insist on trying some black hat techniques, be careful.
There are millions of sites, each with its own goal and purpose; the goals of a commercial site differ from those of an addiction treatment hospital's site.
A successful SEO plan seeks to achieve the site’s goal.
This goal must have a measurable indicator, known as a Key Performance Indicator (KPI for short), such as the number of newsletter subscriptions or completed purchases.
The metric not only helps evaluate how effective the SEO plan is, it also helps the SEO specialist design a plan that actually achieves the goal.
Note that the metrics do not include first appearance in search engine results!
Although it is important to appear in the first search results for the site to achieve a large number of visits, it is a means, not an end.
In fact, a few visits with a high goal-completion rate are much better than many visits that achieve nothing: 100 visits converting at 10% bring 10 conversions, while 1,000 visits converting at 0.5% bring only 5.
When you type the word “pasta” into a search engine, you might mean “making pasta,” “pasta pictures,” or “places to buy pasta,” but you are most likely looking for a video of how to make it.
This is what is known as User Intent: the result the user wants from typing specific search words.
The job of SEO is to satisfy the user's desire, or intent, by providing appropriate, valuable content.
But how does an SEO specialist know the user's intent?
The same way I know you are probably looking for a pasta video when you type “pasta” into the search engine:
I simply typed the word and found videos in a Featured Result, ahead of the first ordinary result.
We will discuss featured results later; in short, a featured result is the one that best satisfies the user's intent and appears before the normal search results.
The presence of videos as a featured result indicates that most users who type this word prefer results containing videos over others.
User intent can be grouped into three main types: informational, navigational, and transactional (commercial).
Now you can decide what type of user intent you will base your SEO plan on, based on the site goals you already defined in the previous step.
If the site's goal is getting users to subscribe to a newsletter with the latest sports news, you target keywords with informational intent; if the goal is getting users to buy agricultural tools, the SEO plan focuses instead on keywords with commercial intent.
And remember that search engines are only tools to answer user questions, but how do they work?
SEO aims to optimize websites for search engines in order to top the search results, and the way search engines work depends on three main stages: crawling, indexing, and ranking.
The Internet is huge, containing millions of pages and sites, and crawling is the process of exploring that content.
Search engines send out huge fleets of bots, called spiders or crawlers, which start by exploring a small set of pages.
Through the links on those pages, they move to other pages in search of any new web content.
When the bots find new content on a page, they take a snapshot of it and send it to be stored in a huge database: the index.
This index contains a copy of all the web content the spiders found during the crawl stage, ready for the next stage.
When someone enters search terms, the search engine combs through the indexed content to extract the best results related to the search word.
This process is called ranking, and it relies on many complex algorithms to show the best result first.
For any site to appear in a search engine, it must have gone through the crawl stage, meaning the spiders managed to find it on the huge Internet, and it must have been stored in the huge index.
If you cannot find your site in the search results, it means the spiders either could not find it or were prevented from indexing it.
But do not worry, in this SEO guide, you will learn how to avoid the problems of each stage.
If spiders are unable to find your site, then it will not appear in search engines.
There are several reasons spiders might fail to reach your website, but how do you know whether they have already reached it or not?
You can find out how many of your site's pages are indexed using Google's advanced search operator: enter site: followed by your domain name into the search box.
This operator tells Google to show all indexed pages of that site, and the number of results indicates how many of the site's pages are actually indexed.
The number is approximate rather than exact, but it gives you a glimpse of whether spiders can access and index your pages.
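For example, for the domain used later in this guide, you would search for:

```
site:mashmediaco.com
```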
If this search returns no results, it means spiders cannot reach your site, and you need to fix the problem immediately.
There is a robots.txt file in the site's root directory, which serves as a guide telling spiders which parts of the site they may browse and which they may not.
You can access the robots.txt file by entering your site’s Domain name followed by the file name, such as: mashmediaco.com/robots.txt
Note that the file allows spiders to crawl some parts of the site, but not others.
If the file blocks spiders from the entire site, they will not be able to access or index any of it.
You may fall into this trap if you block spiders while your site is under construction, then forget to update the robots.txt file after launch.
But can you benefit from preventing spiders from accessing some pages of your site after launching the site?
The answer is, yes.
There may be a section of your site dedicated to employees that you do not want appearing in the search results, since it does not matter to visitors.
Or one of your site's sections may still be under development and updating.
Also pay attention to the Crawl Budget: sending spiders out to crawl the web costs search engines money.
It is best to spare spiders the trouble of crawling sections with less important content that you do not want in the search results anyway, as the sketch below shows.
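As an illustration only, here is a minimal sketch of a robots.txt that keeps spiders out of such sections; the /staff/ and /under-development/ paths are hypothetical examples:

```
# Allow crawling, but keep the less important sections out (hypothetical paths):
User-agent: *
Disallow: /staff/
Disallow: /under-development/

# Blocking the entire site (the "under construction" trap described earlier)
# would instead look like:
# User-agent: *
# Disallow: /
```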
Now that we have kept spiders away from the less important content, let's make sure they can reach the actual content of your site.
Spiders arrive at your site from an external link that points to one of your pages.
But can they navigate easily between the pages of your site and reach all the important ones?
Here are some obstacles that may prevent them from seeing your entire site:
If you require visitors to sign in to access content, or you use anti-bot protection such as CAPTCHA questions, spiders will not be able to reach the protected content.
Likewise, if you rely on images to display text, spiders will not be able to understand what you wrote, so it is better to write text as properly formatted paragraphs.
The site consists of the main page, sections, and sub-pages, and sub-pages may branch out to other sections.
It is important that the site structure is simple and clear for both the visitor and spiders to navigate between the pages of the site.
Even after spiders reach your site from an external link, they may be unable to continue navigating through it, for example because of a confusing structure or broken internal links.
A sitemap, as its name indicates, gives the bots an overview of the site's sections and makes it easier for them to move between its pages.
You can check that your sitemap is readable by entering sitemap_index.xml after your domain name, e.g. mashmediaco.com/sitemap_index.xml.
To make sure spiders find your sitemap, add its link at the beginning of your robots.txt file.
Or you can manually enter it into Google using Google Search Console.
Of course, a sitemap does not replace a clear site structure, but it helps robots to access your important site content.
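As a minimal sketch, a small XML sitemap could look like the following; the URLs and date are placeholders for your own pages:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://mashmediaco.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://mashmediaco.com/seo-guide</loc>
  </url>
</urlset>
```

And this is the line that points spiders to it from robots.txt:

```
Sitemap: https://mashmediaco.com/sitemap_index.xml
```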
As spiders crawl and move between links, they may encounter broken links; these are known as crawl errors.
A large number of broken links on your site signals to the search engine that the site is poorly maintained, and it stops spiders from continuing their crawl.
You can get a comprehensive report of all crawl errors encountered by spiders while browsing your site using the Crawl Errors tool from Google Search Console.
Crawl errors are denoted by status codes that indicate the nature of the problem or error.
These codes do not only mark broken links; valid pages return status codes too, but since those pages work fine, their codes never show up as errors.
Let me tell you about the most common crawl errors that prevent bots from reaching your pages.
The most common 4xx client-error code is 404, and you have surely met it while browsing the Internet.
This code appears when there is no page associated with the entered link.
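If you want to check specific links yourself, a few lines of Python can report the status code each one returns. This is a minimal sketch assuming the third-party requests library (pip install requests); the URLs are placeholders for links on your own site:

```python
import requests

urls = [
    "https://mashmediaco.com/",
    "https://mashmediaco.com/some-old-page",  # hypothetical broken link
]

for url in urls:
    try:
        # HEAD fetches only the status line and headers, not the page body.
        response = requests.head(url, allow_redirects=False, timeout=10)
        print(url, "->", response.status_code)  # e.g. 200 OK or 404 Not Found
    except requests.RequestException as exc:
        print(url, "-> request failed:", exc)
```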
Remember what you did when you encountered a link that took you to a page that didn’t exist?
You probably got frustrated and closed the tab or exited the site, which is what most visitors do.
So not only does this code stop spiders from reaching the page, it also hurts the user experience and increases the bounce rate.
One smart way around this problem is to customize the 4xx error page.
You can modify the interface of the error page to give the visitor different options for navigation.
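How you set this up depends on your server software. As one example, assuming an Apache server, a single line in the .htaccess file points the 404 code at a custom page (the /404.html path is a placeholder for your own error page):

```
ErrorDocument 404 /404.html
```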
Sites live on servers: to display a page, the browser sends the server a request for a link, and the server responds by serving the page.
If the server is busy or undergoing maintenance, it cannot respond to the browser's request, it returns a 5xx server-error code, and its pages will not appear.
Link redirection is one of the most important and widely used methods of handling crawl errors.
It means automatically forwarding one link to a new link.
This technique is used when a page is removed from the site and its link is redirected to another page or to the home page, or when a page's URL changes.
Redirecting links lets you keep the visitors and value of the old link instead of losing them to an error page.
There are two codes for a redirected page: 301 for a permanent redirect and 302 for a temporary one.
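For example, again assuming an Apache server, a permanent redirect can be declared in the .htaccess file; the paths are placeholders:

```
Redirect 301 /old-page https://mashmediaco.com/new-page
```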
But beware of the redirect chain: redirecting the same link more than once.
Spiders do not like redirect chains, and chains affect how long they spend crawling your site.
So always try to reduce the number of chained redirects; instead of redirecting link 1 to link 2 and then link 2 to link 3, redirect link 1 to link 3 directly.
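As a small sketch of how to spot a chain, the requests library records every redirect hop it follows; the URL below is a placeholder:

```python
import requests

# requests follows redirects automatically and keeps each hop in .history.
response = requests.get("https://mashmediaco.com/old-link", timeout=10)

for hop in response.history:
    print(hop.status_code, hop.url)  # each redirect hop, in order
print("final:", response.status_code, response.url)

# More than one hop means a chain: redirect the first link
# straight to the final URL instead.
if len(response.history) > 1:
    print("Redirect chain detected.")
```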
With that, you have made sure spiders can see your site and navigate easily between its pages; the next step is making sure they can index it.