Most of the time, companies like Google aren't tracking you per se; they're using your movements to target advertising and marketing toward your preferences.
It's about the cookies...
What we see a lot now with growing privacy concerns is the exploitation of third-party cookies.
While the US has lagged behind, Europe led the charge with the GDPR, and California adopted similar regulations with the CCPA.
The GDPR is quite strict: among other things, it requires companies to delete your identity and all associated records and history at your request.
This is gradually becoming the de facto standard.
Cookies were originally intended to provide "memory" in a stateless HTTP world: preserving your preferences on a site between visits, keeping items in early shopping carts.
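As a minimal sketch of that "memory," here is the kind of Set-Cookie header a server emits to remember a preference between visits (the "theme" preference is a hypothetical example, built with Python's standard http.cookies module):

```python
from http.cookies import SimpleCookie

# Hypothetical server response: remember a user's theme preference
# so it survives between visits to this site (a first-party cookie).
cookie = SimpleCookie()
cookie["theme"] = "dark"
cookie["theme"]["max-age"] = 60 * 60 * 24 * 30  # persist for ~30 days

# The string the server would send in its Set-Cookie response header;
# the browser stores it and sends "theme=dark" back on every later request.
header = cookie["theme"].OutputString()
print(header)  # theme=dark; Max-Age=2592000
```

The browser returns this value with each subsequent request to the same site, which is all the statefulness a preference or shopping cart needs.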
Then third-party cookies came along, sharing data across platforms. While they don't target you specifically as an individual, they build profiles of your marketing preferences. This has been massively exploited because it offers tremendous potential for ad revenue.
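A toy simulation of the cross-site mechanism (all domain names and the cookie-jar structure here are hypothetical): a single tracker domain embedded on many sites receives its own cookie on every page, so it can stitch visits together into one profile:

```python
# Minimal simulation of cross-site tracking via a third-party cookie.
# "tracker.example" stands in for an ad/tracking domain whose pixel or
# script is embedded on many unrelated first-party sites.
tracker_cookies = {}  # the tracker's server-side store, keyed by its cookie ID

def visit(browser_id, first_party_site):
    # Each page on first_party_site embeds a resource from tracker.example.
    # The browser attaches tracker.example's cookie no matter which site
    # hosts the page, so the tracker links the visits into one profile.
    profile = tracker_cookies.setdefault(browser_id, {"sites": []})
    profile["sites"].append(first_party_site)
    return profile

visit("browser-1", "news.example")
visit("browser-1", "shop.example")
print(tracker_cookies["browser-1"]["sites"])  # ['news.example', 'shop.example']
```

No single site handed over your history; the tracker assembled it simply by being present on all of them.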
Apple's Safari and Mozilla's Firefox have already blocked third-party cookies, and Google says Chrome will do so by the end of the year. What follows is up for debate. Google will use its own system, FLoC (Federated Learning of Cohorts); others may adopt Unified ID 2.0; and some may turn to perhaps the most troubling method, "fingerprinting," in which all the data from your movements is aggregated to make a potentially high-accuracy guess at your specific identity. Google argues that fingerprinting is really bad (it is) and that FLoC will be better (it probably is), but FLoC will create a walled garden that only Google can access.
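Fingerprinting needs no cookie at all. A rough sketch of the idea (the signal values below are made up, and real fingerprinters use many more attributes): stable browser characteristics are combined and hashed into an identifier that follows you across sites:

```python
import hashlib

# Illustrative only: combine a handful of browser signals (hypothetical
# values) into a single stable identifier -- the essence of fingerprinting.
signals = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "language": "en-US",
    "installed_fonts": "Arial,Helvetica,Times",
}

# Sort keys so the same signals always produce the same string, then hash.
raw = "|".join(f"{k}={v}" for k, v in sorted(signals.items()))
fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:16]
print(fingerprint)  # same signals -> same ID on every site, no cookie needed
```

Because the identifier is recomputed from the browser itself on every visit, clearing cookies does nothing to escape it, which is why it is the most troubling of the alternatives.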
Facebook and Instagram use identity data more aggressively, and you will see ads for things your friends are even vaguely interested in, but this has been their model all along with filtering and promotion, so nothing new.
This practice is the root of the conspiracy and extremist content spreading like wildfire now, as it creates a self-perpetuating cycle of "recommendations" driven by popularity and similarity. The model is exploitative by design: it fosters addictive behavior that keeps eyes on the screen longer and drives ad revenue.
Alexa, Google Assistant, and Siri all listen constantly but (supposedly) don't record anything until they hear the wake word. In practice, customers can only trust that everything outside that window is discarded. Even Apple, after much talk of privacy, was found to be violating its own policy.
As far as government surveillance goes, AI could be used to collect keyword signals and patterns, but it is still clumsy and inefficient in most applications. AI has tremendous potential, both good and bad, but it is very immature right now. It can filter data and look for patterns, though, and that is where it will be used for the time being, replacing and augmenting human analysts. In digital surveillance, it isn't practical to have people do anything other than react to the signals and trends that might merit escalation.