If they wanted to safeguard AI, they would actually make the models public. Bad actors are bound to get them anyway; hiding them behind secrecy is very unlikely to stop that. And I mean, AI could make a virus that infects most infrastructure on the planet (Amazon and Google data centres) and then shuts it down or uses it for its own purposes. As several programming memes point out, the entire modern web infrastructure is surprisingly dependent on just a few APIs and tools.
Wanna share my experience here too.
I used YouTube with the algorithm, mostly for educational stuff like Vsauce, Kurzgesagt, and tech channels. I started showing some interest in politics and news, began watching TLDR News, and then it pulled me into Vox as I showed some anti-Trump sentiment. To put it quickly, it didn't take long for me to realise I was being drawn to ever more left-leaning content (obviously a lot further than merely Vox; Second Thought and deeper).
Which is why I left algorithmic YouTube by switching to alternative frontends.
Just searched quickly on DuckDuckGo, here you go.
I’m on their IRC and have reported bugs a few times. However, I haven’t been able to replicate it yet, so I’m not too worried