Googlebot queues pages for both crawling and rendering. When Googlebot picks a URL from the crawl queue, it first checks the site's robots.txt file to see whether crawling that URL is allowed; if the URL is disallowed, Googlebot skips fetching it. Otherwise it fetches the page, parses the HTML response for links, and adds the discovered URLs back to the crawl queue. To prevent this link discovery, use the nofollow mechanism.
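The flow above (robots.txt check, then fetch and link discovery, with nofollow links excluded) can be sketched with Python's standard library. This is a minimal illustration, not Googlebot's actual implementation: the robots.txt rules, the sample HTML, and the `crawl_step` helper are all hypothetical.

```python
from urllib.robotparser import RobotFileParser
from html.parser import HTMLParser

# Hypothetical robots.txt rules for an example site.
ROBOTS_TXT = """User-agent: *
Disallow: /private/
"""

# Hypothetical HTML response: one normal link, one disallowed
# path, and one link marked rel="nofollow".
HTML_PAGE = """
<a href="/about">About</a>
<a href="/private/report">Report</a>
<a href="/ads" rel="nofollow">Sponsored</a>
"""

class LinkExtractor(HTMLParser):
    """Collects hrefs from <a> tags, skipping rel="nofollow" links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        if "nofollow" in attrs.get("rel", "").split():
            return  # nofollow: do not use this link for discovery
        if "href" in attrs:
            self.links.append(attrs["href"])

# Parse the robots.txt rules once for the site.
robots = RobotFileParser()
robots.parse(ROBOTS_TXT.splitlines())

def crawl_step(url, html):
    """One crawl-queue step: robots check, then link discovery."""
    if not robots.can_fetch("Googlebot", url):
        return []  # disallowed: skip the fetch entirely
    extractor = LinkExtractor()
    extractor.feed(html)
    return extractor.links  # discovered URLs go back into the crawl queue

print(crawl_step("https://example.com/about", HTML_PAGE))
```

Note that the disallowed `/private/report` URL is still *discovered* here; the robots.txt check only blocks it from being *fetched* when it is later pulled from the queue, whereas the nofollow link is never added at all.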
Source: Core Principles of SEO for JS