Why Harvard College students are worried this week:
Harvard University faculty are set to begin voting Tuesday on the boldest attempt in decades to rein in grade inflation, an issue that’s drawn attention from the White House in its push to remake higher education.
The proposal under consideration would limit A grades in undergraduate courses to no more than 20% of the class plus four additional students. Roughly 60% of grades were an A in the academic year ending in mid-2025 at Harvard, more than double the rate in 2006. That fell to 53% in the fall semester after Harvard urged faculty to be more disciplined.
The outcome of the vote could catalyze wider changes: other schools may follow Harvard's lead, and the White House has included grading reform in the compact it has proposed that select schools sign in exchange for priority access to federal funding.
Tech Titans Clash: Altman Defends Integrity in Musk Legal Showdown
OAKLAND, Calif. — The federal courtroom in Oakland transformed into a Silicon Valley arena Tuesday as OpenAI CEO Sam Altman took the stand to defend his reputation against a high-stakes lawsuit leveled by his former partner and benefactor, Elon Musk.
The trial, now in its third week, centers on a fundamental dispute over the soul of OpenAI. Musk alleges that Altman and co-founder Greg Brockman "double-crossed" him by pivoting from a non-profit mission to a "capitalistic venture" now valued at a staggering $852 billion.
Key Testimony: A Battle of Character
Altman’s appearance follows days of testimony from former allies who painted a starkly different picture of his leadership.
The Accusation: Musk’s legal team leveraged testimony from former board members Helen Toner and Tasha McCauley, as well as co-founder Ilya Sutskever, to depict Altman as dishonest and resistant to oversight.
The Evidence: Jurors were shown a 2023 memo from Sutskever alleging a "consistent pattern of lying" and a "directionally bad" text exchange with CTO Mira Murati that has since become a viral meme.
The Defense: Facing a barrage of questions, Altman remained firm. “I believe I am an honest and trustworthy businessperson,” he told the court, framing the lawsuit as a product of Musk’s "jealousy" over OpenAI's success.
Altman Strikes Back: Concerns Over Musk’s Control
Altman didn't just defend himself; he pivoted to critique Musk’s own history with the company. He detailed a "hair-raising" moment from OpenAI's early days when Musk allegedly suggested that control of the AI firm should eventually pass to his children.
"Part of the reason we started OpenAI is we didn’t think AGI could be under the control of any one person, no matter how good their intents are," Altman testified.
Altman further alleged that Musk repeatedly attempted to have Tesla absorb OpenAI—a move Altman claimed was entirely at odds with the startup's mission.
What’s at Stake?
The verdict in this jury trial will ripple far beyond the courtroom, affecting the entire AI landscape:
| Party | The Goal | The Risk |
| --- | --- | --- |
| Sam Altman | Maintain leadership and proceed toward a massive IPO. | Permanent damage to his reputation as a "trustworthy" leader. |
| Elon Musk | Force Altman out and redirect funds to OpenAI’s charitable arm. | Public perception of "sour grapes" and aggressive business tactics. |
| AI Industry | Clarity on the legal boundaries of non-profit vs. for-profit AI. | Increased public skepticism and negative perception of AI safety. |
As OpenAI, Musk’s xAI, and rival Anthropic all prepare for potential initial public offerings, the trial serves as a volatile backdrop to what could be some of the largest market debuts in history. For now, the jury is left to weigh the word of the world's richest man against the architect of the AI revolution.
eBay on Tuesday turned down a $56 billion takeover bid by GameStop, describing it as "neither credible nor attractive." The online marketplace relayed the news in a letter from Chairman Paul Pressler that characterized its business as "strong (and) resilient." Analysts had questioned the viability of GameStop's cash-and-stock offer from the start, given that the company is less than one-quarter the size of its target. Investors appeared to agree, with eBay stock trading about $20 below the $125-per-share offer price. GameStop CEO Ryan Cohen has already threatened to take his bid directly to shareholders.
Google DeepMind offshoot Isomorphic Labs has successfully raised another $2.1 billion to fund its AI-driven drug development capabilities, Bloomberg reports. The investment, led by Thrive Capital, will support workforce expansion and software improvements. Despite facing criticism for its secrecy, the company is making strides towards pre-clinical trials for its drug candidates. The infusion marks a significant step in Isomorphic's goal of commercializing AI-designed pharmaceuticals, and it could signal a shift toward eventual independence from Google parent Alphabet.
Instructure said Tuesday that it reached a deal with the hackers who breached its widely used educational software, Canvas, last week. The Utah-based firm said ShinyHunters, the hacking group that claimed responsibility for the cyberattack, would return data stolen from thousands of schools, but didn't specify what it would give in return. The hackers said they'd accessed messages, email addresses, and other information for more than 275 million users around the world, shutting down Canvas for hours on Thursday — in the middle of most schools' crucial final-exams period.
After a string of AI controversies, The New York Times emailed a “periodic reminder” to freelancers on Tuesday, reminding them of the paper’s AI policy.
“To be clear on AI: All writing and visuals that freelancers submit to The Times must be the product of human creativity and craft, and all submissions must consist solely of their original reporting, writing, and other work,” reads the email, reviewed by Futurism. “Freelance contributors must not submit any material for publication that contains content generated, modified, or enhanced by [generative AI] tools, or that has been input into these tools.”
The email pointed its contributors to a detailed document on its “policy on freelancers’ use of generative AI tools,” which forbids the inclusion of AI-generated or AI-modified text and images in any reporting contributed to the paper. While AI tools are acceptable for “high-level” brainstorming, the notice warns, freelancers “may not use [generative AI] tools to help you write any part of a story.”
“Using [generative AI] tools to create, draft, guide, clean up, edit, improve, or rephrase your writing is strictly prohibited,” it continues. As for what specific tools the company’s actually speaking to, the document forbids “chatbots like Gemini, Claude, ChatGPT, and Perplexity; AI-powered search products like Google AI Overviews; and image generators like Adobe Firefly, DALL-E, and MidJourney.”
The reminder comes as the paper of record continues to grapple with AI-generated content, including preventable AI-spun errors, slipping into its pages. Back in March, the NYT faced scrutiny after a contributor to its competitive "Modern Love" column was publicly accused of using AI to generate an emotional personal essay; that writer later told Futurism that she'd used chatbots to conceptualize and edit the piece. Then, in April, the paper cut ties with a freelancer who admitted to using AI to cook up a book review that was found, after publication, to be riddled with plagiarism.
And while these controversies indeed stemmed from the work of freelancers, the institution found itself in hot water yet again last week, when a substantial correction, issued weeks after publication, revealed that an article bylined by the NYT's Canada bureau chief contained an AI-fabricated quote. (As Futurism reported in March, a writer at Condé Nast's Ars Technica was fired for a similar error.)
“An article on April 15 about the success that Mark Carney, the Liberal prime minister of Canada, has had in building cross-party alliances was updated after The Times learned that a remark attributed to Pierre Poilievre, the Conservative leader, was in fact an AI-generated summary of his views about Canadian politics that AI rendered as a quotation,” reads the update. “The reporter should have checked the accuracy of what the AI tool returned.”
Futurism reached out to the NYT to ask whether this kind of reminder is normal, and whether the notice has anything to do with its recent flurry of AI scandals. In response, the paper shared a statement saying that “we regularly provide updated guidance to freelancers and in this case we wanted to be clear about our policies regarding the use of AI.”
“In-house journalists have separate guidelines for using AI and approved GenAI tools,” the paper added.
Updated with a statement from The New York Times.