Last year, Google announced that it had deprecated its previous AJAX crawling proposal.
Indeed, Google today crawls content inserted by JS, with a few notable exceptions. For example:
Content that depends on calls to third-party services/APIs before the user interface can provide any value to the end user.
Are there any client-side rendering techniques that would allow content retrieved from such a third-party service/API to be crawled? (For example, by simulating synchronous loading, deferring rendering, and so on.)
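To make the scenario concrete, here is a minimal sketch of the pattern in question (the endpoint and data shape are hypothetical): the page has no meaningful markup until a third-party API responds, so a crawler that snapshots the DOM before the response arrives sees an empty page.

```javascript
// Hypothetical example: the page is empty until a third-party API responds.
// `fetchImpl` stands in for window.fetch and `root` for a DOM element
// (anything with an `innerHTML` property), so the sketch is self-contained.
async function renderListing(fetchImpl, root) {
  // Hit the third-party API first; nothing is rendered until this resolves.
  const res = await fetchImpl('https://api.example.com/listings'); // hypothetical endpoint
  const items = await res.json();
  // Only now does the UI gain any content a crawler could index.
  root.innerHTML = items.map(item => `<li>${item.name}</li>`).join('');
  return root.innerHTML;
}
```

Anything that snapshots `root` before `renderListing` resolves captures an empty element, which is exactly the crawlability problem the question is about.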