Robots.txt is a plain-text file format used in SEO (search engine optimization). It contains directives that tell search engine crawlers (robots) which parts of a website may and may not be crawled. Robots.txt was created to prevent search engines from unintentionally crawling pages you do not want them to visit. The main purpose of the robots.txt file is to tell crawlers which pages on your site they can and cannot access. In effect, robots.txt acts as a table of contents for your site, helping search engines understand how best to crawl it.
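As an illustration, a minimal robots.txt file might look like the sketch below. The paths and sitemap URL are hypothetical placeholders, not part of any real site:

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Here `User-agent: *` applies the rules to all crawlers, `Disallow` blocks crawling of everything under /admin/, and `Allow` permits the rest of the site. The file must be served from the root of the domain (e.g. https://example.com/robots.txt) for crawlers to find it.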
The robots.txt file is part of the REP (Robots Exclusion Protocol), which governs how robots crawl a site, index its content, access the site, and serve that content to users. The REP also includes directives such as meta robots tags, as well as instructions for how search engines should treat links (for example "follow" or "nofollow") on a page, subdirectory, or site-wide basis.
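To see how a crawler interprets these directives in practice, here is a short sketch using Python's standard-library `urllib.robotparser`, with a hypothetical set of rules and example URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only
rules = """\
User-agent: *
Disallow: /admin/
Allow: /
""".splitlines()

# Parse the rules the way a well-behaved crawler would
rp = RobotFileParser()
rp.parse(rules)

# Check whether a given user agent may fetch a given URL
print(rp.can_fetch("*", "https://example.com/admin/page"))  # blocked by Disallow
print(rp.can_fetch("*", "https://example.com/blog/post"))   # permitted by Allow
```

A polite crawler calls a check like `can_fetch` before requesting each URL; note that robots.txt is advisory, so it only restrains crawlers that choose to honor the protocol.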