Ambitious Minds Technical Site Audit
As part of the Ambitious Minds Website Audit, you will receive recommendations on which actions to take first to ensure that your website is up to date, ranks well on search engines and, above all, leaves your visitors happy that they found what they needed without fuss.
Glossary of terms used
We know that a number of the terms and acronyms used when talking about websites can be confusing, which is why we’ve put together this glossary to help you make sense of the terms used in our Technical Site Audit.
HTML stands for HyperText Markup Language and is the standard language used to create web pages. It is written using elements called tags, which are commands in angle brackets (like <html>). HTML tags most commonly come in pairs of opening and closing tags, such as <title> and </title> or <h1> and </h1>.
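To illustrate, here is a very simple but complete web page written in HTML (the page title and wording are purely illustrative):

<html>
<head>
<title>Ambitious Minds</title>
</head>
<body>
<h1>Welcome to Ambitious Minds</h1>
<p>This is a paragraph of text on the page.</p>
</body>
</html>

Notice how each opening tag, such as <body>, is matched by a closing tag, such as </body>.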
An H1 tag is used to mark the importance of heading text on a webpage. The H1 to H6 tags are used to define HTML headings. H1 defines the most important heading, like a newspaper headline, while H2 and H3 are used for sub-headings. H6 defines the least important headings.
Along with other “on page” elements, the use of H1 tags is recommended for search engine optimisation, which means making your website more likely to be seen by search engines and therefore to appear higher on search returns. For search engines like Google, the use of an H1 tag for a given word or phrase is a signal that the page is probably focused on that keyword or phrase and is therefore relevant to users’ search queries which contain it.
For the heading on this page, an H1 tag is used, while H2 and H3 are used for sub-headings, as you can see here:
The title is written like this: <h1>Ambitious Minds Technical Site Audit</h1>
The main sub-heading is written like this: <h2>Glossary of terms used</h2>
All other sub-headings are written like this: <h3>Meta description</h3>
Meta descriptions are HTML attributes that summarise the content of a web page. They are commonly used on search engine results pages (SERPs) to display preview snippets for a given page.
The example below is from a search for Ofsted:
Meta descriptions, while not a direct factor in search engine rankings, are extremely important in getting users to click through from SERPs. These short paragraphs are an opportunity to advertise content to searchers and to let them know exactly whether the given page contains the information they’re looking for.
The meta description should employ the keywords intelligently but also create a compelling description that a searcher will want to click. Direct relevance to the page, and uniqueness between each page’s meta description, is key. The optimal length for a meta description is between 150 and 160 characters, so it is worth the effort of composing a description to fit within this length, rather than having your description cut off (as seen in the first example above).
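As an illustration, a meta description sits in the head section of a page’s HTML and might look like this (the wording here is purely an example):

<meta name="description" content="Hire our sports hall, pitches and studios at affordable rates. Find out about sports centre hire at our school and how to book.">

This example fits comfortably within the recommended length, so it will not be cut off in search results.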
A spider is a program that visits websites and reads their pages and other information in order to create entries for a search engine index. The major internet search engines all use such programs, which are also known as “crawlers” or “bots”. Spiders are typically set to visit sites that have been submitted by their owners as new or updated, and entire sites or specific pages can be selectively visited and indexed. Spiders are so called because they usually visit many sites in parallel, their “legs” spanning a large area of the “web”.

Spiders use several techniques to search through a site’s pages, one of which is to follow all of the hypertext links in each page until all those pages have been read. Depending on the spider, not all pages of your site will be read, and there are several HTML commands that you can use to limit which pages they see (for example, to hide printer-friendly versions of pages).
The robots meta tag, supported by the major search engines, controls whether search engine spiders are allowed to index a page and whether they should follow the links it contains. The tag can contain one or more comma-separated values.
The noindex value of an HTML robots meta tag requests that automated Internet bots avoid indexing a web page. Reasons for using this tag include very large databases, pages that are very transitory, pages that one wishes to keep slightly more private, and the printer- and mobile-friendly versions of pages. Since the burden of honouring a website’s noindex tag lies with the author of the search robot, these tags are sometimes ignored. The interpretation of the noindex tag can also differ slightly from one search engine company to the next.
nofollow, noarchive and nosnippet
nofollow tells search engines not to follow the links on a page. Other values recognised by one or more search engines can influence how the engine indexes pages and how those pages appear in search results. These include noarchive, which instructs a search engine not to store an archived copy of the page, and nosnippet, which asks that the search engine not include a snippet from the page along with the page’s listing in search results.
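Putting these together, a robots meta tag asking search engines not to index a page, not to follow its links and not to store an archived copy would look like this:

<meta name="robots" content="noindex, nofollow, noarchive">

Like the meta description, this tag is placed in the head section of the page’s HTML.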
HTTP Error Messages
Sometimes when you try to visit a web page, you’re met with an HTTP error message. It’s a message from the web server that something went wrong. In some cases, it could be a mistake that you’ve made (such as mistyping the URL), but more often than not it’s the website’s fault.
Each type of error has an HTTP error code dedicated to it. For example, if you try to access a web page that doesn’t exist or has been deleted, you will be met with the familiar “404 page not found” error.
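To illustrate, the first line of the web server’s response tells the browser what happened; some common responses look like this:

HTTP/1.1 200 OK (the page was found and returned normally)
HTTP/1.1 301 Moved Permanently (the page has moved to a new address)
HTTP/1.1 404 Not Found (the page does not exist)
HTTP/1.1 500 Internal Server Error (something went wrong on the server)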
A sitemap is a file that lists the web pages of your site. It tells search engines how your site content is organised, and some search engine spiders, such as Googlebot, read this file to crawl and index your site more intelligently.
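As an illustration, a very small sitemap file (usually named sitemap.xml) listing a single page might look like this; the web address and date shown are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://ambitiousminds.co.uk/</loc>
<lastmod>2015-01-01</lastmod>
</url>
</urlset>

Each page of the site gets its own <url> entry, and the optional <lastmod> date tells spiders when the page last changed.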
A keyword phrase is generally two or more words used in search engine optimisation. Users enter them into search engines when searching on a specific topic, and the search engines use them to match search results and adverts. For example, a school that hires out its facilities might use “sports centre hire” as a keyword phrase on its website or in its online advertising to attract interested users. When those users search for “sports centre hire”, the search engine matches their query with websites that mention the same phrase.
URL is an acronym for Uniform Resource Locator and is a reference (web address) to a resource on the Internet. A URL has two main components: the protocol identifier and the resource name. For the URL http://ambitiousminds.co.uk, the protocol identifier is http. For the URL http://keepthecashgame.com, the resource name is keepthecashgame.com.
Most web browsers display the URL of a web page above the page in an address bar.
Alternative or “alt” text is used to describe an image or other element on a web page. In the case of images, it should convey the same essential information as the image itself. In situations where images are not available to the reader, perhaps because they have turned off images in their browser or are using a screen reader due to visual impairment, the alt text ensures that no information or functionality is lost. Absent or unhelpful alt text can be a source of frustration for visually impaired users of the web.
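For example, in HTML an image’s alt text is added using the alt attribute (the file name here is purely illustrative):

<img src="teacher-photo.jpg" alt="Teacher directing students">

If the image cannot be shown, the browser or screen reader presents the words “Teacher directing students” instead.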
In the example above, the alt text for the image in a BBC news story is “_80995840_gettyteacherpointing”, which appears to be the file name of the photo and is confusing to the reader. The alt text could have read “Teacher directing students”, which would be much clearer to the user.
It takes a bit of time and effort to get alt text right, but it is worth it in the end.
Some webmasters use content taken (“scraped”) from other, more reputable sites on the assumption that increasing the volume of pages on their site is a good long-term strategy, regardless of the relevance or uniqueness of that content. (This paragraph is courtesy of Google Webmaster Tools.)
Google Analytics generates detailed statistics about a website’s traffic and traffic sources and measures conversions and sales. It’s the most widely used website statistics service.
Google Analytics can track visitors from all referrers, including search engines and social networks, direct visits and referring sites. It also tracks display advertising, pay-per-click networks, email marketing, and digital collateral such as links within PDF documents (for example, a school’s prospectus).
In Part Two we’ll show you all of the areas of a website we check for errors and how we can improve the delivery of a website to a viewer.