Matt Cutts, the head of search spam at Google, recently released a video discussing a future in which links carry less weight in ranking websites. To save you 3 minutes: Google knows that as long as the majority of its algorithm depends on links, it will always be easy to manipulate. So it makes sense for them to try everything they can to move away from links as the primary ranking factor.
Back in 2005, Google filed a patent called Agent Rank for a technique that would allow them to rank a piece of content based on the person who authored it. In theory, when content is added to the web, there would be a digital signature connecting the real life author to the database’s digital author profile. This signature could be unique, and attached to every piece of content that author puts on the web, creating a portfolio of sorts for each author.
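The signature idea in the patent can be illustrated with a toy sketch. A real system would use public-key cryptography, and everything here (the author key, the function names) is hypothetical; HMAC is used only because it shows the core idea of binding a piece of content to an author's key:

```python
import hmac
import hashlib

# Hypothetical per-author secret; a real scheme would use a key pair.
AUTHOR_KEY = b"alice-secret-key"

def sign_content(content: str, key: bytes = AUTHOR_KEY) -> str:
    """Produce a signature tying this piece of content to the author."""
    return hmac.new(key, content.encode(), hashlib.sha256).hexdigest()

def verify_content(content: str, signature: str, key: bytes = AUTHOR_KEY) -> bool:
    """Check that the content really came from the holder of the key."""
    return hmac.compare_digest(sign_content(content, key), signature)

article = "How to change spark plugs on a Honda Civic"
sig = sign_content(article)
assert verify_content(article, sig)            # genuine content verifies
assert not verify_content(article + "!", sig)  # tampered content fails
```

Every signed article the author publishes would carry such a signature, letting the index build the portfolio the patent describes.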
An authority score assigned to each author, or "Author Rank," could then be used to give weight to new articles and content that author produces. For instance, an author could launch a brand-new website, and because their Author Rank is high across 20 other websites, Google would associate them with other great content and potentially give more authority to that site, even though it is new.
Some websites, though, don’t include authorship info, and Google doesn’t have any good way to assign value to these pages outside of links. But that could change with Google’s acquisition of DeepMind, home to some of the world’s leading researchers in artificial intelligence and deep learning. The term “deep learning” emerged in the mid-2000s to describe machine-learning architectures that find connections between different sets of data. It stands to reason, then, that deep learning is most effective when it has large quantities of data to sort through and analyze.
As of January 2014, Chrome holds a dominant market share at 55% of all internet browsing, Google Analytics runs on over 15 million websites, the Ad Network reaches over 2 million websites, and Gmail is the leader in web mail. It’s no secret that the amount of data Google is able to collect is simply unfathomable. Until now, sorting through so much data and drawing informed conclusions has been troublesome for computers. From TechCrunch:
World-renowned artificial intelligence expert and Google’s new Director of Engineering, Ray Kurzweil, wants to build a search engine so sophisticated that it knows users better than they know themselves. “I envision in some years that the majority of search queries will be answered without you actually asking.”
But now, DeepMind’s AI will play a role in all of Google’s infrastructure, including search, advertising, and social. With the end goal of documenting and drawing smart connections between the real world’s people, places, events, and things, we can assume that Google will use its mass repositories of data to create individual user profiles for each of us, including authors and readers.
Users will see an increased level of relevancy in searches. For instance, if someone has emails in their inbox discussing the purchase of a new Honda Civic, then likes Honda on Google+, and finally posts pictures of their new Honda Civic to G+ with hashtags, then when that user searches for “change spark plugs,” Google will tailor the results to include videos and tutorials specific to the user’s history: in this case, changing spark plugs on a Honda Civic.
Likewise, authors will see an increase in engagement from their readers. Let’s say I’m a mechanic who runs a blog detailing simple maintenance on Hondas and Toyotas (Japanese cars). Google sees that I frequently discuss topics related to car maintenance, and that I mention Japanese brands, not American ones. Users searching for “how to change my car’s oil” who own Fords hopefully won’t see my blog, and by the same token, the time users spend on my site will likely increase because the content is more relevant to their lives.
In the future of the internet, where digital and real life become more integrated, it will be important that we associate ourselves and our businesses with others who are considered industry leaders, in hopes of being given credit and the benefit of the doubt by association. As such, we should all start building a digital reputation now, so that we aren’t behind when the time comes.
Here at The Search Engine Guys, we take pride in our agility; we’re quick to execute client requests and responsive to Google algorithm updates, all while managing our daily tasks. As a player in the tech and web game, having an edge on our competition is always important. And we would not be able to maintain that competitive edge without a variety of tools at our disposal, day in and day out.
As the SEO Strategist, the tool I use most when I’m working is Majestic SEO. Majestic offers a myriad of tools for taking a deep look at a given website. Like many other online services (including search engines), Majestic SEO compiles what I personally believe to be the largest and most complete data set available. While they are not the only service to provide this kind of reporting, in my experience the metrics Majestic SEO uses are the most beneficial and accurate on the market. From Majestic’s “About Us” page:
Majestic SEO surveys and maps the Internet and has created the largest commercial Link Intelligence database in the world. This Internet map is used by SEOs, New Media Specialists, Affiliate Managers and online Marketing experts for a variety of uses surrounding online prominence including Link Building, Reputation Management, Website Traffic development, Competitor analysis and News Monitoring. As link data is also a component of search engine ranking, understanding the link profile of your own, as well as competitor websites can empower rational study of Search Engine positioning. Majestic SEO is constantly revisiting web pages and sees around a billion URLs a day.
The most frequently used Majestic tool is definitely the Site Explorer. Its easy-to-navigate interface parses and summarizes large data sets so that you see all of the most valuable information about a site on one page. Simply enter the URL you wish to investigate, click Explore, and voila!
Once you’ve entered your URL, you’re whisked away to the Explorer Summary. This page displays all kinds of valuable information, like:
While this page displays only a summary, being able to visualize the important pieces of each section lays the foundation for professional site analysis. There are clickable tabs that allow you to delve deeper into each section, as you’ll notice below:
On May 22nd, 2013, Matt Cutts announced that Google had begun rolling out an algorithm update known as Penguin 2.0. Webmasters were quick to notice that many of the sites that lost ranking on the Search Engine Results Page (SERP) had something in common: over-optimized anchor text. What had once been a viable strategy was now being frowned upon. Without a tool like Majestic’s Site Explorer, noticing these trends would have been much, much harder.
Another thing to love about Majestic SEO is the multi-tiered pricing structure. For starters, if you are interested in exploring a site that you own, you can get free access by going through a quick validation process. This free access extends only to exploring your own site, but for the average website owner or in-house marketer, it is a perfect taste of what is to come with the monthly pricing.
The first paid package is the Silver tier at $49.99 a month. Compared to Majestic’s two major competitors, this entry-level tier is easily the most affordable. This package offers everything that an individual or small business would need, including 60 detailed reports a month and a maximum of 5 million analyzable backlinks.
The second paid tier is the Gold tier, at $149.99 a month. Compared to Majestic’s competitors, this mid-level package is still the most affordable. The major differences between the Silver and Gold packages are the increased number of detailed reports allowed monthly (300 a month) and an increase in the maximum number of analyzable backlinks (25 million). This tier offers plenty of functionality and allocated resources for even a large, enterprise-level corporation.
The final, largest tier is Platinum. At a whopping $399.99 monthly, you are offered the ability to run a staggering 950 reports per month and analyze up to 100 million backlinks. The big difference between the Gold and Platinum tiers is access to the Majestic API, giving developers an easy way to query the database and extract information to build their own reports.
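For developers curious what API access looks like, here is a rough sketch of assembling a query for Majestic’s JSON endpoint. The endpoint URL and command name are assumptions for illustration; check Majestic’s own API documentation before relying on them:

```python
from urllib.parse import urlencode

# Assumed endpoint; confirm against Majestic's API documentation.
API_BASE = "https://api.majestic.com/api/json"

def build_backlink_query(api_key: str, domain: str) -> str:
    """Build a request URL asking the index for summary data on a domain."""
    params = {
        "app_api_key": api_key,          # your Platinum-tier API key
        "cmd": "GetIndexItemInfo",       # assumed command name
        "items": 1,                      # number of items in this request
        "item0": domain,                 # the domain to look up
    }
    return API_BASE + "?" + urlencode(params)

url = build_backlink_query("YOUR_KEY", "example.com")
```

Fetching that URL with any HTTP client would return JSON that a custom reporting tool can parse on its own schedule.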
At The Search Engine Guys, we have subscribed to and used Majestic SEO on a daily basis for over 3 years. We would not be able to handle the high volume of client reporting and data analysis we perform without the tools Majestic SEO provides, and we recommend it to anyone looking for a comprehensive way to do website analysis. With continued updates and support, the team behind Majestic continues to impress us, and we’re looking forward to maintaining our relationship with the company for years to come.
To contact the author, emails can be sent to: firstname.lastname@example.org
Since the idea of the internet and websites first caught on around 1989, marketers and designers have been studying intently to find the best way to leverage consumer interaction. Just like in traditional marketing, a great deal of research is conducted with the intent of discovering how web users engage with the information on any given website. When The Search Engine Guys take on a new client for design and optimization, one thing we try to keep in mind is the way people naturally scan any given page for information that they’re looking for. Luckily, we don’t need to spend much time researching this user interaction because there are plenty of other groups interested in this kind of data, and they have made it readily available to anyone who searches for it. I want to provide a little bit of insight into some of the more popular studies on eye-tracking and how we use these data in our designs.
The idea of tracking eye movements and creating diagrams dates back to the late 1800s, when a French ophthalmologist noticed his test subjects were reading in a series of short stops and quick movements, as opposed to one long, smooth sweep. Here is an early diagram of fixations and saccades, the quick movements from point to point:
This observation was further explored in the 1900s, first with primitive contact lenses fitted with aluminum pointers, and later by reflecting beams of light off of the subjects’ eyes and onto film. It was later observed that eye movements depend largely on the task given to the user. To quote Alfred L. Yarbus,
“Records of eye movements show that the observer’s attention is usually held only by certain elements of the picture…. Eye movement reflects the human thought processes; so the observer’s thought may be followed to some extent from records of eye movement (the thought accompanying the examination of the particular object). It is easy to determine from these records which elements attract the observer’s eye (and, consequently, his thought), in what order, and how often.”
In the 1980’s we saw the advent of real-time eye tracking using computers. This allowed for a much more accurate depiction of how the user interacts with any given image or text. The pieces were finally coming together to lay the foundation for eye tracking on web pages.
Well, not in the scary big brother sense. In 2009, Microsoft sponsored a popular study titled, What Do You See When You’re Surfing? Using Eye Tracking to Predict Salient Regions of Web Pages. The premise of the study was to gain “an understanding of how people allocate their visual attention when viewing Web pages”. While there had been similar studies in the past, the researchers point out that these studies were generally ambiguous, only identifying scan paths as opposed to fixation time, or using only three different sample pages for test subjects. Leveraging an eye-tracker built by Tobii Technology, Microsoft presented 361 web pages to 20 test subjects. With the data they collected, Microsoft was able to describe the general flow of eye movements, which provides us with invaluable information about user interaction. A few notable facts:
The study suggested that those who use the internet once or more a day spend less time actually reading the content, and scan pages faster than those who do not use the web as often. Other findings include:
The most linked-to research on eye-tracking was conducted by the Nielsen Norman Group, or NN/g. The study is a 355-page report based on usage data from over 300 users looking at hundreds of different websites. The findings revealed many important insights.
We found that users’ main reading behavior was fairly consistent across many different sites and tasks. This dominant reading pattern looks somewhat like an F and has the following three components:
- Users first read in a horizontal movement, usually across the upper part of the content area. This initial element forms the F’s top bar.
- Next, users move down the page a bit and then read across in a second horizontal movement that typically covers a shorter area than the previous movement. This additional element forms the F’s lower bar.
- Finally, users scan the content’s left side in a vertical movement. Sometimes this is a fairly slow and systematic scan that appears as a solid stripe on an eyetracking heatmap. Other times users move faster, creating a spottier heatmap. This last element forms the F’s stem.
F-shaped Heat Maps
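The heatmaps described above are built by binning raw fixation coordinates into grid cells and counting how often the eye landed in each one. A minimal sketch (with made-up pixel coordinates standing in for eye-tracker output) might look like this:

```python
from collections import Counter

def fixation_heatmap(fixations, cell=100):
    """Bin (x, y) gaze fixations into cell x cell pixel squares.

    Returns a Counter mapping (col, row) grid cells to fixation counts;
    the hottest cells are where the eye lingered longest.
    """
    return Counter((x // cell, y // cell) for x, y in fixations)

# Fixations clustered along the top of a page make a hot top row,
# the beginning of the "F" pattern; the scattered left-edge points
# below it form the stem.
points = [(50, 20), (180, 30), (320, 25), (60, 140), (40, 300)]
heat = fixation_heatmap(points)
```

Real studies also weight each fixation by its duration, but simple counts are enough to show how the F shape emerges from the data.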
NN/g also reported several interesting bits about the different reasons users turn to search functions, how they analyze those search results, and how they choose which result to click:
One final point worth mentioning from this study is the classification of scanning behaviors, and why people may read more or less content on your website.
- Exhaustive review: People look extensively and repeatedly at an area or page because they expect the information they want to be there, but they cannot find it.
- Directed scanning: A person looks for specific information such as a name or word and expects to find it on the page.
- Motivated scanning: Scan patterns fueled by good page layout, interesting content, personal interest, or a trusted suggestion.
- Impressionable scanning: A person is more open to reading the words as the author has written them.
Here at The Search Engine Guys, we pay attention to details like these to make sure that our clients’ websites are optimized to capture the attention of the user. Our goal is to make sure the user has quick access to whatever information they may be looking for: contact and brand information is seen quickly, followed by easy navigation to deeper areas of information. Thinking about it, it makes sense that web designers use this kind of knowledge to help map the flow of a website. Being able to leverage the instincts of a user means higher conversion rates and, in turn, more leads, and that’s good news for everyone.
To contact the author, emails can be sent to: email@example.com