Website Architecture And SEO


SEO is typically discussed in terms of ideas like organic search and click-through rates. What is often less emphasised, but is of equal significance, is how the actual structure of a website affects SEO. SEO is a nuanced process that requires deep, layered knowledge to exploit fully. Here are some tips on how you can improve SEO through your website architecture.

Hierarchies

Search engines like Google and Bing use web crawling and caching as the means by which they index and order the content presented to them. Web Crawlers are essentially automated scripts that browse the Internet in an ordered and methodical manner. This process is the centrepiece around which all of your website architecture should be designed, as doing so will have an unequivocally positive impact on your SEO. One way to exploit it is through the development of a hierarchy.

A hierarchy is a way to organise the information presented on a website. The idea is that all pages are subdivided into different groups, each of which links back to one central page. Conceptually, the home page sits at the top of this hierarchy, individual pages about products and services sit at the bottom, and main categories separate the two. The main categories themselves should be distinct, sharing very few if any keywords between them. There should also be a relatively small number of main categories (between two and seven). Furthermore, the subcategories that make up each main category should be as evenly distributed as possible, as you don't want data clustering under any one main category.

You can carry this ethos into the URL structure of the website itself. The URL schema should mirror the site hierarchy, with higher-level categories preceding lower-level ones. For instance, the web page for the cordless drill range at Bunnings Warehouse is www.bunnings.com.au/our-range/tools/power-tools/cordless-drills. Notice the hierarchical flow from higher-order information down to lower-order detail.
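As a rough illustration, a category page's navigation can echo that same hierarchy in its markup. The paths below are hypothetical stand-ins (not Bunnings' actual structure), intended only to show higher-level categories sitting above lower-level ones:

    <nav>
      <!-- Main categories sit directly under the home page -->
      <a href="/our-range/tools/">Tools</a>
      <a href="/our-range/garden/">Garden</a>

      <!-- Subcategories nest one level deeper, mirroring the URL path -->
      <a href="/our-range/tools/power-tools/">Power Tools</a>

      <!-- Product-range pages sit at the bottom of the hierarchy -->
      <a href="/our-range/tools/power-tools/cordless-drills/">Cordless Drills</a>
    </nav>

Each link's URL path restates its position in the hierarchy, which is exactly the pattern the Bunnings example above follows.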

The purpose of this type of site structure ties back to Web Crawlers. Web Crawlers tend to favour data that their algorithms can easily understand, so sites must be structured in a way that is conducive to those algorithms. Sites that employ a trickle-down model of information flow play to the strengths of the Web Crawler system, as crawlers can travel more easily through the pages of the domain. This in turn helps a site's SEO.

Avoiding JavaScript and Flash

Traditionally, Google has had trouble indexing website architecture written in JavaScript and recognising text nested inside Flash. Though Google has advanced its Web Crawling software to mitigate these issues, relying on JavaScript and Flash is still a net negative for SEO.

The issue with JavaScript is quite simple: Google's Web Crawlers are designed around text written in HTML and CSS. Thus, when site architecture is constructed in JavaScript, it hampers Web Crawler movement through the site's pages and decreases the site's presence in search engines. Flash content has a different issue: it registers as images in Web Crawler analysis, even when you intend certain pieces of information to be read as text.

The way to circumvent this issue is simple: write your website's structure and navigation in HTML and CSS. This greatly reduces the chance that your site hides links from Web Crawlers, and so increases the likelihood that its pages will display in search engines.
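To make the contrast concrete, here is a minimal, hypothetical example. The first link is plain HTML that a Web Crawler can follow directly; the second hides the destination inside a script, which crawlers have historically struggled to follow:

    <!-- Crawler-friendly: a standard HTML link with the destination in the href -->
    <a href="/products/shovels/">Shovels</a>

    <!-- Harder to crawl: the destination only exists inside JavaScript -->
    <span onclick="window.location='/products/shovels/'">Shovels</span>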

Develop An Internal Linking Structure

Internal links are links that go from one page on a domain to another page on that same domain. They are commonly used in order to facilitate website navigation. If applied correctly, they can also aid in the process of constructing a more SEO-friendly site.

For a search engine to list pages in its massive keyword-based indices, Web Crawlers need to see content. They also require a crawlable link structure in order to find all of the pages on a website. So, for instance, let's say you have a page dedicated to the product details of a shovel. If this page is not linked to from any other pages (i.e. not linked from any of the main categories or subcategories), crawlers will have great difficulty reaching it, even though it exists on the same domain as the homepage. No matter how great the content or keyword targeting on this page may be, it will never show in the search engine results. It is therefore important to employ internal links, woven through your category and subcategory pages, to avoid this issue, as in the sketch below.
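Continuing the shovel example with hypothetical paths, the fix is simply to give crawlers an HTML trail from the category level down to the product page, and ideally back up again:

    <!-- On the "Garden Tools" category page: a crawlable path down to the product -->
    <a href="/our-range/garden-tools/shovels/">Shovels</a>

    <!-- On the shovel page itself: a link back up the hierarchy -->
    <a href="/our-range/garden-tools/">Back to Garden Tools</a>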

Crawl Budget

Even with a site structure that employs all of the aforementioned techniques, the level of analysis exhibited by a Web Crawler will by no means be exhaustive. Due to the sheer breadth of information Google's systems have to filter through, Web Crawlers will at times decide which pages in a domain will and won't be indexed. This limited capacity to analyse a website's data is known as the "crawl budget".

So when designing a website you are faced with a conundrum: you want as much of your website indexed as possible to improve your SEO, but a Web Crawler will only spend a limited amount of time on your site. Consequently, you want to make sure that the pages the Web Crawlers do land on are significant. There are a few techniques that can be employed to aid you in this venture:

The first is to add the rel="nofollow" attribute to links pointing at pages you consider of little significance. Furthermore, you can actually put JavaScript to good use here, as Google's difficulty processing it means pages built with it tend to be deprioritised by the Web Crawler. Finally, you can block certain pages in your robots.txt file to stop Web Crawlers from spending time on pages of lower importance. The first and last of these techniques are sketched below.
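As a hedged sketch (the paths here are examples only, not a recommendation for any particular site), a rel="nofollow" attribute on a link signals that the destination is of little importance, while robots.txt rules stop crawlers from entering whole sections:

    <!-- A nofollow hint on a link to a low-value page -->
    <a href="/login/" rel="nofollow">Log in</a>

    # robots.txt: block crawlers from low-importance sections
    User-agent: *
    Disallow: /login/
    Disallow: /internal-search/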

By steering the crawl budget away from these lower-order pages, you ensure that search engines spend the lion's share of their crawling on the pages you deem significant. Since those pages get greater attention, they will be elevated above pages on sites that don't use this technique and instead let their crawl budget spread evenly across every page. So, in a way, limiting search engine access to some of your data helps the SEO of other data.

The Importance Of Unseen Details

Website architecture is an easy facet of Internet marketing to ignore, as it remains largely hidden from the consumer. It is important to understand, however, that though this structure goes largely unseen, its ramifications are every bit as significant as the most visible aesthetic details of a website. If it is ignored, SEO, and by extension the success of your website, will suffer dearly.
