Fix Robots.txt file
What is it?
Robots.txt is a file on your site that tells search engines which parts of your website may be crawled. There are many search engines, but typically the biggest concern is making sure Google can index your site. Robots.txt tells Google's crawler whether all of your page content should be indexed and available to people searching for you, none of it, or only certain parts.
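As a rough sketch, robots.txt is just a plain-text file at the root of your domain (for example, example.com/robots.txt). The `/clients/` path below is only an illustration, not a required name:

```text
# Allow all crawlers ("*") to index everything
# except one private area of the site
User-agent: *
Disallow: /clients/

# Optional: point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` line names which crawler the rules apply to (`*` means all of them), and each `Disallow` line lists a path that crawler should skip.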
Why should you care?
There are legitimate reasons to keep some or all of your site from being indexed by any robot, especially Google. If you've created content for a specific group of people on your website (content just for employees, or just for current clients), you want those people to be able to access it, but you may not want it to come up in a search result.
What we see most often: people in the process of building a new site don't want Google to crawl it until it's finished, so they disallow robots. The problem is, they forget to change the allow/disallow setting after going live. Our analogy for this: you've built a brand-new store and held a grand opening, but unless you told someone your exact address, no one can find you. People can't look you up on a map, people driving by can't see you, and even people who know the name of your new business, or have an idea of what street you're on, can't find you.
A robots.txt set to universally disallow indexing makes your site essentially invisible on Google, Bing, Yahoo, and the rest. Only people who already have a link to your website will be able to get there.
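The accidental "invisible site" usually comes down to one rule left over from development. A typical leftover file looks something like this:

```text
# Blocks ALL crawlers from ALL pages.
# Fine during development, disastrous after launch.
User-agent: *
Disallow: /
```

The fix is small: change `Disallow: /` to `Disallow:` (with nothing after it) or remove the rule entirely, and crawlers can index the site again.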
Let us do the work!
We are happy to:
- Discuss with you if there are any pages that should not be indexed (internal pages, custom client resources etc.)
- Adjust the robots.txt file to accommodate these edits
- Submit your robots.txt file to Google
We know you want to know how much your project will cost, but we need a little more information to give you a fair estimate.