Food for thought: Never underestimate the influence of cultures of materialism behind the extreme hype over new(ish) technologies like #AI. If we(?) were thinking clearly, we wouldn't put a tenth of the faith in AI that we seem to put in it nowadays; sadly, that faith already appears to be enough to give it power over human lives and deaths. Sure, it can do good things, and where it does, it should be applied; but many applications can be rendered moot just by changing our expectations and living differently.
The best thing about #AI is that it's like a mirror that we can hold up to ourselves. IMO it will never truly reflect our image, but we can compare and contrast our reality with what we see in order to learn more about ourselves. Take the growing number of studies on #bias in AI, teaching us about our own unconscious biases and #prejudice about race, ethnicity, religion, culture, sex and more. We *need* to bring this stuff to light, so we can... deal with where we're at, grow up and move on.
In this respect, this whole thing is actually quite an interesting (if extreme) study in #HCI: What would it take for you to believe that an #AI program you created had literally taken on a life of its own? This sort of parallels some of the thoughts I've been having regarding making @xyzzy work as a stateless storyteller: Is it possible for a #bot to fool a human into thinking it's keeping track of context, when it's actually the human doing that work instead? And if so, just how easy is it?
Read up a little on what I guess we can call the #LaMDA fiasco. It's actually so much more interesting than I had expected. The story isn't so much about #AI as it is about the relationship of the human to the computer, and of the creator to what he's created. It's like the story of Pygmalion played out in the modern day—except I doubt Aphrodite will turn LaMDA into a real person this time.
So here's some spaghetti code I slapped together with #python that uses #ML to "auto-curate" an #RSS feed and identify the most "wholesome" content. It does this by checking keywords of interest, and by running an #NLP sentiment analysis and a naive Bayes classifier. It's definitely a work in progress and still generates a lot of false positives, but I feel like it's a good base to build on. PRs welcome! #AI (ping @benoit)
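Since the repo isn't linked here, here's a minimal, self-contained sketch of the approach the post describes: a keyword check combined with a tiny naive Bayes sentiment classifier over feed entry titles. Everything in it (the `KEYWORDS` set, the class names, the training samples) is an illustrative placeholder, not the actual project code; a real version would pull entries with a feed parser and train on far more data.

```python
import math
from collections import Counter

# Hypothetical keywords of interest; placeholders, not the project's real list.
KEYWORDS = {"community", "garden", "rescue", "volunteers", "recovery"}

def tokenize(text):
    """Lowercase and strip basic punctuation from each word."""
    return [w.strip(".,!?").lower() for w in text.split()]

class NaiveBayes:
    """Tiny multinomial naive Bayes with two labels, 'pos' and 'neg'."""
    def __init__(self):
        self.word_counts = {"pos": Counter(), "neg": Counter()}
        self.class_counts = Counter()

    def train(self, text, label):
        self.class_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # Log prior from class frequencies.
        total = sum(self.class_counts.values())
        log_p = math.log(self.class_counts[label] / total)
        n_words = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["pos"]) | set(self.word_counts["neg"]))
        for w in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score.
            log_p += math.log((self.word_counts[label][w] + 1) / (n_words + vocab))
        return log_p

    def predict(self, text):
        return max(("pos", "neg"), key=lambda lbl: self.score(text, lbl))

def is_wholesome(title, clf):
    """An entry passes if it hits a keyword AND the classifier calls it positive."""
    has_keyword = any(w in KEYWORDS for w in tokenize(title))
    return has_keyword and clf.predict(title) == "pos"
```

Requiring both signals to agree is one simple way to cut down the false positives the post mentions, at the cost of recall. Usage would look like training `clf` on a handful of labeled headlines and then filtering each feed entry's title through `is_wholesome`.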
Could we say, then, that the main problem with #AI isn't in the technology itself, but in the way that human beings (try to) use it? It's a tool, like a hammer. If you know what it's good for and how to use it properly, you can do countless great things. Of course, you could also kill someone with a hammer if you're not careful—or, of course, if you're malicious.
Watched @janellecshane's TED talk on the weird ways that #AI can fail. Really good, entertaining talk that addresses pretty much what I've always felt about AI, but hadn't really put into words quite as well: AI can solve plenty of problems, but the real challenge is getting it to solve the problems we *want* it to solve.
So is parrot-computer interaction a legitimate field of study? Like, is anyone working with a lab full of parrots talking to bots to see what kind of trouble they can cause and what items they can order off the internet? If so, here's my CV, please hook me up with the most hilarious job in the world #ai #hci
The social network of the future: No ads, no corporate surveillance, ethical design, and decentralization! Own your data with Mastodon!