Welcome to Part 2 of our 3-part SEO basics series, designed to help you “speak SEO” with your team by becoming better acquainted with the basic SEO concepts most often discussed. Part 1 of our series covered basic on-page SEO terms and definitions. In this segment, we’ll define essential SEO linking concepts and search engine directives.
Search Engine Directives: Definitions
Meta robots refers to the automated search engine “robots” — usually referred to as “bots,” such as “Googlebot” and “Bingbot” — that “crawl” the Web, discovering individual Web pages and adding them to the index from which the search engine results pages (SERPs) are built.
By using relatively simple HTML (Hypertext Markup Language) code, SEOs can “tell” search engine bots which Web pages and page-level information to exclude from search results, as well as how to handle the links contained within a Web page’s content, via directives (often referred to as “tags”).
The working definitions of the most common search engine linking tags are:
Index tells Meta robots to index the Web page, thereby including it in the SERPs. Specifying this isn’t necessary, however, as it is the default setting.
Noindex instructs the Meta robots to exclude the Web page from indexing in the SERPs. For example, if your site has an unfinished Web page, you could ensure search engines don’t display that page until it is ready for user viewing with a “noindex” Meta robots tag. (In terms of HTML code, the command would be <meta name="robots" content="noindex">).
- Note that while a “noindex” directive will prevent the Web page from appearing in the SERPs, the page will continue to be crawled. This means links within the page content will be followed and the destination page of the link indexed, unless a “nofollow” directive is also defined, as discussed below.
Follow directs Meta robots to follow the links included on the Web page to the destination pages indicated by the links’ respective URLs (Web page addresses), which can then be crawled and indexed in the SERPs. As with “index,” the “follow” directive is in effect by default.
Nofollow prevents Meta robots from following and indexing the Web page URLs of links included within the original page content.
Using the example (above) of an incomplete Web page from the “noindex” directive, should you want to exclude both the Web page and the link URLs within its content from indexing, the HTML code would read <meta name="robots" content="noindex,nofollow">. A more concise directive recognized by Googlebot for “noindex,nofollow” is simply none (in HTML, it’d be <meta name="robots" content="none">).
- Note that the “nofollow” tag disallows the following/indexing of all links on the Web page level. This is distinct from the “nofollow” attribute that applies to individual links within the page’s content.
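To illustrate that distinction, here is a minimal sketch of both forms; the URL and anchor text are placeholders, not examples from the original article:

```html
<!-- Page-level directive in the <head>: tells bots not to follow ANY links on this page -->
<meta name="robots" content="nofollow">

<!-- Per-link attribute: only this individual link is excluded from following -->
<a href="https://example.com/some-page" rel="nofollow">Example anchor text</a>
```

The page-level tag lives in the document’s head section, while the rel attribute is added to each anchor element you want excluded.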
Noarchive prevents Meta robots from showing a cached copy of the Web page in search results.
Nosnippet blocks Meta robots from displaying the Web page’s Meta description in the SERPs, as well as the page’s cached copy.
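As with the earlier “noindex,nofollow” example, multiple values can be combined in a single Meta robots tag. A sketch of a tag suppressing both the cached copy and the snippet might look like this:

```html
<!-- Prevents both the cached-copy link and the search result snippet -->
<meta name="robots" content="noarchive,nosnippet">
```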
Link Profile Concepts
As search engine bots crawl the Web via links, your website’s link profile plays a key role in determining its search visibility. There are several essential elements to a “healthy” link profile, defined as follows:
Inbound links, also referred to as “backlinks,” are those from other websites linking to yours.
Beginning with the initial release of Google’s “Penguin” update in 2012 and several iterations thereafter, webmasters and site owners have become increasingly vigilant about the quality of inbound links comprising their backlink profile. Targeted at link schemes seeking to pass “PageRank” (which Google describes as its “opinion of the importance of a page based on the incoming links from other sites”) from one Web page to another, the Penguin algorithm has resulted in countless manual penalties — oftentimes leveled at legitimate websites that have unknowingly or unwittingly become ensnared in a link scheme. More recently, Google has targeted penalties at the specific pages carrying bad links rather than the entire domain.
- Our article on link audits details a 4-step process for assessing link quality. Should you discover unnatural or poor-quality backlinks in your profile, the article also outlines measures you can take. If those fail, then you may have to employ Google’s “disavow backlinks” tool.
- Read our guide to using backlinks effectively for content and SEO success.
Outbound links are external links from your site’s Web pages to those of another site. As with inbound links, it is a best practice to exercise caution when linking out; be sure that the website is in good standing with Google and other search engines.
Internal links are those linking separate pages within a website. These links act as a “wireframe” within a site, providing internal structure that assists users with navigation and Meta robots with crawling and indexing.
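In HTML, an internal link is simply an ordinary anchor whose destination is another page on the same site, often written with a relative URL; the path below is a hypothetical placeholder:

```html
<!-- Relative URL pointing to another page on the same website -->
<a href="/services/seo-audit">Our SEO audit services</a>
```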
Anchor text is the clickable word or words used in a link, whether internal or external (outbound).
- Anchor text used to be viewed by search engines as a strong ranking signal, but because spammers abused anchor text (stuffing it with exact-match keywords) in their attempts to manipulate search engine rankings, Google scaled back the ranking weight it assigns. Now, sites found to be overusing exact-match keyword anchor text may be subject to a manual penalty.
Besides exact match, anchor text can be partial match, zero match (generic) or branded (the brand name or website URL).
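To make those variations concrete, here is a sketch using a hypothetical site selling running shoes; example.com and the anchor wording are placeholders:

```html
<!-- Exact match: the anchor text is the target keyword itself -->
<a href="https://example.com/">running shoes</a>

<!-- Partial match: contains the keyword alongside other words -->
<a href="https://example.com/">where to buy running shoes</a>

<!-- Zero match (generic): no keyword at all -->
<a href="https://example.com/">click here</a>

<!-- Branded: the brand name or website URL -->
<a href="https://example.com/">Example Shoe Co.</a>
```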
That’s a wrap for Part 2 of our guide to basic SEO concepts! Be sure to catch Part 3, where we’ll discuss even more technical SEO terms.