# https://developers.google.com/search/docs/crawling-indexing/robots/intro
# - "The default assumption is that a user agent can crawl any page or directory not blocked by a disallow rule."
# I.e. there is no need for "Allow" directives.

User-agent: *
Disallow: /account
Disallow: /min-konto
Disallow: /search
Disallow: /package/change

Sitemap: https://www.strim.no/sitemap.xml
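
# Note on matching (a clarification based on the robots.txt spec linked above,
# not part of the original rules): Disallow values match by URL path prefix,
# so "Disallow: /account" also blocks deeper paths such as /account/settings
# or /account?tab=overview. The example paths here are illustrative only, not
# actual site URLs.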