In the days of yore, everything that appeared on a web page was stored on a server or generated by a process running on a server. Web browsers received and rendered HTML pages that were more or less complete in themselves. From a search engine crawler's perspective, that was all good, because the crawler could simply take the HTML and index the content.
It's helpful here to understand the difference between the code sent by a server and the DOM (the Document Object Model), the in-memory representation of the page that the browser builds and then renders to the user. When all of a page's content is sent from the server, there's a direct, one-to-one mapping between that code and the DOM. If a web crawler like Googlebot can understand the HTML and CSS the server sends, it knows everything it needs to know about the page.
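To make that mapping concrete, here's a minimal sketch of a fully server-rendered page (the markup and text are hypothetical). Because no script runs, the DOM the browser builds is a one-to-one reflection of the markup, and "View Source" and the browser's Elements panel show the same tree:

```html
<!-- Hypothetical example: the entire response from the server -->
<!DOCTYPE html>
<html>
  <head>
    <title>A fully server-rendered page</title>
  </head>
  <body>
    <h1>Welcome</h1>
    <p>Every word on this page arrived in the initial HTML response.</p>
    <!-- No scripts run here, so the DOM the browser builds
         mirrors this markup exactly, and a crawler that reads
         only the raw HTML sees the same content a user does. -->
  </body>
</html>
```

It's only when scripts start adding or changing content after the page loads that the server's code and the DOM stop lining up.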