This is getting crazy: a deep-fake livestream. And surprisingly, the people on the stream are skeptical but fooled anyway.
Insane, right? It’s only going to get worse.
Good thing there is a US election coming up. Speaking of Elon, who is a public supporter of Donald Trump, he has already shared a deepfake of Kamala Harris.
By the way, if you are on Twitter, make sure to clear out some settings (and targeting) on your X account:
Midjourney is FREE Again
In other news, Midjourney has re-enabled a free tier after seeing pressure from Flux, which is available for free.
Midjourney started as a Discord bot in 2022 and transformed into a website in 2023, allowing users to create AI images with text prompts. Initially exclusive to heavy users, it's now open to all via Discord or Google logins. Newbies get 25 free images before hitting a paywall. There are four subscription tiers ranging from $10 to $120 a month, each offering more GPU time for faster image creation and additional features like private images. Plus, rating images can nab you extra GPU time, and annual subscribers get a 20% discount.
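If you're eyeballing the annual option, the math from the note above works out like this. A quick sketch (tier names and prices here are illustrative; only the $10–$120 range and the 20% annual discount come from the paragraph):

```python
# Hypothetical illustration of the pricing math described above.
# Tier names are made up; the price range and 20% discount are from the text.

def annual_cost(monthly_price, annual_discount=0.20):
    """Yearly total when paying annually with the stated 20% discount."""
    return monthly_price * 12 * (1 - annual_discount)

print(annual_cost(10))   # cheapest tier paid annually -> 96.0
print(annual_cost(120))  # top tier paid annually -> 1152.0
```

So the cheapest plan runs about $96/year instead of $120 if you commit up front.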
Give it a try!
And OpenAI has seen a straight-up exodus of their AI safety team in the past few weeks.
Alright, let's break it down real simple. So, OpenAI, the brain behind ChatGPT, had a bunch of folks working on keeping AI safe and making sure it doesn't go rogue and start doing its own thing, potentially putting humanity at risk. Now, the buzz is that almost half of these safety-squad members have hit the road in the last few months. Daniel Kokotajlo, a guy who used to work there on governance stuff, spilled the beans that the team has shrunk dramatically.
The folks who left were pretty key players, including a co-founder and some top researchers. They were part of a special crew focused on making sure that if we ever get to creating super smart AI, it would still be under human control. But with these departures, some are worrying that OpenAI might be getting a bit too eager to push out products without fully wrestling with the big, scary "what ifs" of super-intelligent AI.
Kokotajlo himself bailed because he felt the company's vibe was changing – less about safety and more about selling stuff. He pointed out that some big hires and management shifts are making it look like OpenAI's priorities are shifting. Also, some AI bigwigs think all this talk about AI danger is overblown and we should focus on the good AI can do, like fighting climate change.
There's also a bit of drama around California's SB 1047, a law that's trying to put some safety checks on AI. OpenAI isn't a fan, and Kokotajlo and a former colleague wrote a letter calling them out for it. Despite his disappointment, Kokotajlo doesn't regret his time at OpenAI and thinks he did some good. But he's warning that there's a bit of a "groupthink" problem at AI companies, where they might downplay risks because they're all racing to be the first to crack AGI.
In a nutshell, some of the smart folks who were supposed to make sure AI doesn't end up overthrowing humanity have left OpenAI. There's concern that the company is getting sidetracked by profits and products, possibly sidelining the super important job of making sure future AI can be controlled.
Key Point:
These are the people we need. And think about it: in this economy, people working at the cutting edge of technology are walking away from high-paying jobs - they are STILL leaving.
Bonkers!
What AI tools have you been playing with?
Check out SureTriggers. It’s like Zapier, but cheaper and with a better UI.