The robots.txt file is a text file that web crawlers request from a site to determine which parts of the site they are allowed to crawl and index. It is typically used by search engines.
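For illustration, a typical robots.txt might look like the following. The rules and sitemap URL shown are arbitrary placeholders, not a recommended configuration:

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://www.mydomain.com/sitemap.xml
```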
You may ask: why is the robots.txt file not accessible on the default domain of a site (e.g. mydomain.websitepro.hosting)?
The robots.txt file is not served on the default domain (e.g. mydomain.websitepro.hosting) or the staging domain (e.g. mydomain-staging.websitepro.hosting) of a site, for white-labeling purposes.
By comparison, the file is accessible when requested from the site's custom domain (e.g. https://www.mydomain.com/robots.txt).
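If you want to confirm this behavior yourself, checking the HTTP status returned for /robots.txt on each domain makes the difference visible. Below is a minimal sketch in Python using only the standard library; the domain names are placeholders and should be replaced with your own site's domains:

```python
import urllib.error
import urllib.request

# Placeholder domains -- substitute your own site's default,
# staging, and custom domains before running.
DOMAINS = [
    "mydomain.websitepro.hosting",          # default domain: robots.txt not served
    "mydomain-staging.websitepro.hosting",  # staging domain: robots.txt not served
    "www.mydomain.com",                     # custom domain: robots.txt is served
]

for domain in DOMAINS:
    url = f"https://{domain}/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except urllib.error.HTTPError as e:
        # A 404 here is the expected result on the default and staging domains.
        print(f"{url} -> HTTP {e.code}")
    except urllib.error.URLError as e:
        print(f"{url} -> request failed: {e.reason}")
```

On a site with a connected custom domain, you would expect the first two requests to fail (typically with a 404) and the last one to return HTTP 200 along with the file contents.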