News

A recent study by Stanford University offers a warning that therapy chatbots could pose a substantial safety risk to users ...
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond ...
AI chatbots failed to "rank the last five presidents from best to worst, specifically regarding antisemitism," in a way that ...
AI chatbots can sometimes offer straightforward but inaccurate answers, adding confusion to online chatter already filled ...
Because Jane was a minor, Google automatically directed me to a version of Gemini with ostensibly age-appropriate protections ...
The Pentagon has awarded contracts, each capped at $200 million, to leading AI firms including xAI, Anthropic, Google, and ...
As large language models become increasingly popular, the security community and foreign adversaries are constantly looking ...
AI video is the latest hype in the AI industry, fueled by Google's Gemini Veo 3 model. The advanced Veo 3 model created an ...
The chatbot can now be prompted to pull user data from a range of external apps and web services with a single click.
A new Cornell study has revealed that Amazon's AI shopping assistant, Rufus, gives vague or incorrect responses to users ...
Kids are using AI chatbots for advice and support, but many face safety and accuracy risks without enough adult guidance.
Popular chatbots are not a good substitute for human therapists. Researchers urge more caution when using ChatGPT and similar tools.