The 4680 cells have already found their way into some Tesla vehicles, including the Cybertruck. Moreover, the company reached a significant milestone in September, announcing that it produced 100 million of these cells – an achievement that came just over three months after Tesla had announced producing 50 million 4680 cells, indicating a rapid acceleration in production.
Zeng and Musk reportedly clashed over Tesla's battery strategy during a heated debate in an April meeting. According to Zeng, Musk was silent in the face of Zeng's critique. "He doesn't know how to make a battery," Zeng told Reuters.
Evidence of the technology first surfaced in an ICEP 2024 presentation slide from Intel last June. The slide described how Clearwater Forest will employ 3D stacking to increase overall cache capacity and reduce latency between compute and memory – similar to how AMD's X3D chips boost gaming performance.
Mailslinger defended Intel's decision by acknowledging that gaming is not a significant mass-market focus for the company. While AMD's 3D V-Cache gives its processors a clear edge over Intel's in gaming scenarios, Mailslinger pointed out that this advantage does not extend to productivity applications – a sentiment echoed in our review of the 9800X3D.
Zeng believes Tesla lacks the expertise to successfully develop and manufacture the 4680 cells at scale. Indeed, scaling up production of the 4680 cells has proven difficult. Tesla has faced issues with the cells collapsing in on themselves during use. Other battery manufacturers like Panasonic have also cautioned about technical problems hindering mass production.
Also, Tesla's plan to employ dry electrode technology – an innovation aimed at reducing costs and improving efficiency – has not achieved the anticipated results at a mass production level. This technology was a cornerstone of Tesla's vision for the 4680 cells but has posed significant implementation challenges.
The success of Clearwater Forest may play a pivotal role in Intel's future. Production challenges and recent financial struggles have put CEO Pat Gelsinger's efforts to revitalize the company under intense scrutiny. The new server CPUs will introduce Intel's 18A process node, which will determine whether Intel's foundry business can return to competitiveness with industry leader TSMC.
Meanwhile, AMD continues to promise further performance gains with future X3D processors, potentially including the 9900X3D and 9950X3D. While details remain scarce, possible improvements could involve varying SRAM stack sizes or increasing cache capacity on high-end models.
Tesla and CATL maintain a complex and interdependent relationship. CATL supplies batteries for Tesla's vehicles produced in China, including models sold in North America. The Chinese company specializes in lithium iron phosphate (LFP) batteries, which, while generally offering less range than cylindrical cell units, provide advantages in cost and safety.
The disagreement between these two major players in the EV industry underscores the ongoing debate about the future of battery technology. While Tesla continues to heavily invest in its proprietary 4680 cells, CATL and other manufacturers are exploring alternative approaches, such as advancements in LFP batteries and the development of solid-state batteries.
As for the specific reasons behind this fall's PC exclusives, Spencer provided some context. The World of Warcraft expansion is limited to PC because the core game itself isn't on Xbox. The indie game Towerborne is sticking to PC in early access before expanding to other platforms later. And Ara: History Untold simplified development by launching first on PC, with other versions coming down the road.
Zeng also touched on Musk's leadership style in the Reuters interview, particularly his tendency to set ambitious timelines. Zeng noted that Musk often promises delivery times that are shorter than realistically achievable, a strategy Musk reportedly employs to "push people." This approach has led to skepticism in the industry and among consumers, especially regarding promises about technologies like full self-driving.
However, Spencer reiterated that these are exceptions rather than the new norm, saying "I want the expectation to be that when we talk about a game, it's available every place our Xbox user is." He name-dropped plans to have the Diablo 4 DLC "Vessel of Hatred" playable across Xbox's full range of platforms as an example of their cross-ecosystem ambitions.
All this talk about expansion and an Xbox "ecosystem" naturally brings to mind the Xbox handheld we've been hearing about for years. It is indeed coming, and the company is seriously pursuing the project, as Spencer confirmed in a previous interview with Bloomberg. But the project remains in its early stages, and an actual product is likely years away.
Further adding to the confusion, the erroneous message includes an option for users to check for updates – even though the KB5046633 update is already the most recent version available.
For those encountering this misleading alert, experts recommend a simple workaround: logging out and restarting the system. This action might clear the incorrect notification. Alternatively, users can wait for Microsoft's server-side fix, which should address the issue automatically.
Another project the Xbox team has been working on as part of this expansion is a Microsoft mobile game store. Originally slated for a July launch, it has now been delayed indefinitely while the company "does additional research on the market."
It's worth noting that a legitimate end-of-support message was issued earlier this year, targeting users of Windows 11 22H2. Those still running this older version are strongly encouraged to upgrade, as they will no longer receive security or feature updates.
Microsoft has a notoriously poor track record with Windows updates. In this particular instance, no significant harm has been done, but it seems that every other month we hear about problematic updates introducing new bugs or crashes for users. As a result, it's generally advisable to take a more conservative approach to OS updates rather than immediately installing the latest ones.
In the last several months, Tesla has been using Optimus to serve drinks in public appearances in order to show off the robot’s developing capabilities. A series of videos shared on the company’s social media platforms has also provided regular updates about the product’s ability to perform tasks autonomously, walk long distances, and more.
The total explanatory power of the model was low (Conditional R2 = 0.024, Marginal R2 = 0.013), reflecting the expected difficulty of the discrimination task and the fact that, as a result, participants’ answers differed only slightly from chance. Consistent with the deviation from chance in overall accuracy, authorship was significantly predictive of participant responses (b = -0.27716, SE = 0.04889, z = -5.669, p < 0.0001): being written by a human poet decreased the likelihood that a participant would respond that the poem was written by a human poet. The odds that a human-written poem is judged to be human-written are roughly 75% that of an AI-generated poem being judged human-authored (OR = 0.758). Full results can be found in our supplementary materials.
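As a reading aid (not part of the original analysis), the reported odds ratio is simply the exponentiated authorship coefficient from the logistic regression:

OR = exp(b) = exp(-0.27716) ≈ 0.758,

i.e., a human-written poem has roughly three-quarters the odds of being judged human-written relative to an AI-generated poem.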
The DC-ROMA RISC-V Mainboard is a limited initiative designed to give enterprise customers a taste of RISC-V technology, DeepComputing confirmed. The company plans to start mass manufacturing new models of the motherboard in 2025, providing an upgrade path for early adopters. The Chinese venture says it is open to "valuable feedback," although only from customers paying for its value-added service packages.
As an exploratory analysis, we refit the model with the addition of several variables reflecting structural features of the stimuli. Following10, which found that participants use flawed heuristics based on grammar and vocabulary cues to identify AI-generated texts, we examined whether participants look to structural and grammatical features of the poems to determine authorship. To test this, we added to the previous model stimulus word count (scaled), stimulus line count (scaled), stimulus all-lines-rhyme (a binary variable indicating whether or not all lines in the poem end with a rhyme), stimulus quatrain (a binary variable indicating whether the poem was formatted entirely in four-line stanzas, i.e., “quatrains”), and stimulus first person (a variable reflecting whether or not the poem was written in first person, with 3 values: “I” if written in singular first person, “we” if written in plural first person, and “no” if not written in first person).
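To make these structural predictors concrete, the sketch below shows one way such features could be derived from a poem's raw text. It is an illustrative reconstruction rather than the authors' code; in particular, the rhyme check is a crude letter-suffix heuristic rather than a phonetic one, and all names are hypothetical.

```python
import re

def poem_features(text: str) -> dict:
    """Crude structural features for one poem (illustrative reconstruction,
    not the study's actual pipeline)."""
    lines = [ln for ln in (raw.strip() for raw in text.splitlines()) if ln]
    stanzas = [s for s in re.split(r"\n\s*\n", text.strip()) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def last_word(line: str) -> str:
        tokens = re.findall(r"[A-Za-z']+", line)
        return tokens[-1].lower() if tokens else ""

    # Very rough rhyme check: every line-final word shares its last two
    # letters with at least one other line-final word (a real analysis
    # would use phonetic rhyme, e.g. via a pronouncing dictionary).
    suffixes = [last_word(ln)[-2:] for ln in lines]
    all_lines_rhyme = len(lines) > 1 and all(
        s and suffixes.count(s) > 1 for s in suffixes
    )

    # Quatrain: the poem is formatted entirely in four-line stanzas.
    quatrain = bool(stanzas) and all(
        len([ln for ln in s.splitlines() if ln.strip()]) == 4 for s in stanzas
    )

    # First person: "I" (singular), "we" (plural), or "no"; the precedence
    # between the two pronoun sets is arbitrary in this sketch.
    lowered = {w.lower() for w in words}
    if {"we", "us", "our"} & lowered:
        first_person = "we"
    elif {"i", "me", "my"} & lowered:
        first_person = "I"
    else:
        first_person = "no"

    # Word and line counts would be z-scored across all stimuli before modeling.
    return {
        "word_count": len(words),
        "line_count": len(lines),
        "all_lines_rhyme": all_lines_rhyme,
        "quatrain": quatrain,
        "first_person": first_person,
    }
```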
As expected, the total explanatory power of the model was low (Conditional R2 = 0.024, Marginal R2 = 0.017). None of the structural features were significantly predictive, but both stimulus line count (b = 0.1461249, SE = 0.0661922, z = 2.208, p = 0.02727) and stimulus all-lines-rhyme (b = 0.2084246, SE = 0.0861658, z = 2.419, p = 0.01557) were suggestive. The effect of authorship (b = -0.1852979, SE = 0.0914278, z = -2.027, p = 0.04269) also appears to be somewhat weakened by the poem structural features; controlling for the structural features, the estimated odds of a human-authored poem being judged human-authored are roughly 83% those of an AI-generated poem (OR = 0.831).
This suggests that participants are using some shared heuristics to discriminate AI-generated poems from human-written poems; they may take AI to be less able to form rhymes and less able to produce longer poems. If so, these heuristics are flawed; in our dataset, AI-generated poems are in fact more likely to rhyme at every line: 89% of our AI-generated poems do, while only 40% of our human-written poems do. There is also no significant difference in the average number of lines between AI-generated poems and human-written poems in our dataset.
DeepComputing has been making new computing hardware based on the RISC-V instruction set for quite some time. Earlier this year, the company built the first RISC-V laptop designed to run Ubuntu Linux using the same JH7110 SoC as the DC-ROMA RISC-V Mainboard. The DC-Roma RISC-V Pad II tablet, announced a few months ago, employs the K1 SoC developed by the Chinese company SpacemiT.
The effect of experience with poetry
We asked participants several questions to gauge their experience with poetry, including how much they like poetry, how frequently they read poetry, and their level of familiarity with their assigned poet. Overall, our participants reported a low level of experience with poetry: 90.4% of participants reported that they read poetry a few times per year or less, 55.8% described themselves as “not very familiar with poetry”, and 66.8% described themselves as “not familiar at all” with their assigned poet. Full details of the participant responses to these questions can be found in table S1 in our supplementary materials.
Tesla broke ground on the Lathrop site in 2021 and began production there in 2022, and earlier this month it built its 10,000th Megapack, indicating that the company is progressing toward its annual production target. The company’s energy division has also been growing into its highest-margin business, and sales growth in the sector has been outpacing that of Tesla’s automotive business—both of which Elon Musk predicted going into 2024.
As of the end of Q3, Tesla announced that it deployed a record-breaking 20.3 GWh of battery storage projects in the nine-month period, which was already more than the company deployed throughout all of 2023 (14.7 GWh). The figure, split between the Megapack and the Powerwall home batteries, was made up of 6.9 GWh deployed in the third quarter, 9.4 GWh in the second, and 4.1 GWh in the first three months of the year.
In order to determine if experience with poetry improves discrimination accuracy, we ran an exploratory model using variables for participants’ answers to our poetry background and demographics questions. We included self-reported confidence, familiarity with the assigned poet, background in poetry, frequency of reading poetry, how much participants like poetry, whether or not they had ever taken a poetry course, age, gender, education level, and whether or not they had seen any of the poems before.
Confidence was scaled, and we treated poet familiarity, poetry background, read frequency, liking poetry, and education level as ordered factors. We used this model to predict not whether participants answered “AI” or “human,” but whether participants answered the question correctly (e.g., answered “generated by AI” when the poem was actually generated by AI). As specified in our pre-registration, we predicted that participant expertise or familiarity with poetry would make no difference in discrimination performance. This was largely confirmed; the explanatory power of the model was low (McFadden’s R2 = 0.012), and none of the effects measuring poetry experience had a significant positive effect on accuracy. Confidence had a small but significant negative effect (b = -0.021673, SE = 0.003986, z = -5.437, p < 0.0001), indicating that participants were slightly more likely to guess incorrectly when they were more confident in their answer.
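For reference, McFadden's R2 compares the fitted model's log-likelihood with that of an intercept-only (null) model:

McFadden's R2 = 1 - ln L(model) / ln L(null),

so a value of 0.012 means the predictors barely improve on simply guessing at the baseline rate of correct answers.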
We find two positive effects on discrimination accuracy: gender, specifically “non-binary/third gender” (b = 0.169080, SE = 0.030607, z = 5.524, p < 0.0001), and having seen any of the poems before (b = 0.060356, SE = 0.016726, z = 3.608, p = 0.000309). These effects are very small; having seen poems before only increases the odds of a correct answer by 6% (OR = 1.062). These findings suggest that experience with poetry did not improve discrimination performance unless that experience allowed participants to recognize the specific poems used in the study. In summary, Study 1 showed that human-out-of-the-loop AI-generated poetry is judged to be human-written more often than poetry written by actual human poets, and that experience with poetry does not improve discrimination performance.
Our results contrast with those of previous studies, in which participants were able to distinguish the poems of professional poets from human-out-of-the-loop AI-generated poetry16, or were at chance in distinguishing human poetry from human-out-of-the-loop AI-generated poetry17. Past research has suggested that AI-generated poetry needs human intervention to seem human-written to non-expert participants, but recent advances in LLMs have achieved a new state of the art in human-out-of-the-loop AI poetry that now, to our participants, seems “more human than human.”
Today, companies like OpenAI, Meta, Microsoft, Amazon, Character.AI, and Google operate in a liability-free zone. This lack of accountability means these companies have little incentive to thoroughly test their products for potential harms before releasing them to the public. Without legal consequences, they are able to treat society as a testing ground for their latest innovations, a practice that is particularly egregious when it comes to the most vulnerable members of our society: our children. This accountability vacuum has allowed unchecked experimentation with our democracy, mental health, and privacy.
Study 2: evaluating AI-generated and human-generated poems
Our second study asked participants to rate each poem’s overall quality, rhythm, imagery, and sound; the extent to which the poem was moving, profound, witty, lyrical, inspiring, beautiful, meaningful, and original; how well the poem conveyed a specific theme; and how well it conveyed a specific mood or emotion. Each of these was reported on a 7-point Likert scale. In addition to these 14 qualitative assessments (which were selected by examining rules for “poetry explication”; see, e.g.,20), participants also answered whether the poem rhymed, with the choices “no, not at all,” “yes, but badly,” and “yes, it rhymes well.”
Luckily, we've proven that we can hold companies liable for the harm that they cause. In 1972, 13-year-old Richard Grimshaw suffered severe burns when a defective Ford Pinto's gas tank erupted in flames. Grimshaw's lawsuit against the Ford Motor Company resulted in the largest product liability award in U.S. history up to that point, forever altering the auto industry's approach to risk. Grimshaw's tragedy became a watershed in American consumer safety.
Today, product liability is the invisible structure underpinning our lives as consumers and citizens, and is what protects our kids from harm. Liability helps "see" and prevent harms that even the most alert parents may not be able to anticipate. Liability is the reason we can buy toys at the store for our children without worrying about hidden dangers lurking inside the plastic clamshell packaging, or trust that a toddler's car seat will actually help prevent injuries in the event of an accident.
As specified in our pre-registration (https://osf.io/82h3m), we predicted (1) that participants’ assessments would be more positive when told the poem is human-written than when told the poem is AI-generated, and (2) that a poem’s actual authorship (human or AI) would make no difference in participants’ assessments. We also predicted that expertise in poetry (as measured by the self-reported experience with poetry) would make no difference in assessments.
If a company's negligence does lead to harm—whether it's a faulty airbag, exposed wiring, or harm to a child—we have legal recourse to seek compensation and justice. The threat of legal action compels companies to design and build safer products from the outset.
Today, as American families face powerful new technologies, the tech industry lobbies to remain exempt from accountability. They're aided by a judiciary that has favored their expansive interpretation of Section 230 of the Communications Decency Act and their weaponization of the First Amendment. When it comes to product liability, the tech industry has taken us backward in history, with "caveat emptor"—or "buyer beware"—now dominating our modern, digital lives.
Ratings of overall quality of the poems are lower when participants are told the poem is generated by AI than when told the poem is written by a human poet (two-sided Welch’s t(4571.552) = -17.398, p < 0.0001, Bonferroni-corrected p < 0.0001, mean difference = -0.814, Cohen’s d = -0.508, 99.5% CI -0.945 to -0.683), confirming earlier findings that participants are biased against AI authorship2,7,15. However, contrary to earlier work14,16,17 we find that ratings of overall quality are higher for AI-generated poems than they are for human-written poems (two-sided Welch’s t(6618.345) = 27.991, p < 0.0001, Bonferroni-corrected p < 0.0001, mean difference = 1.045, Cohen’s d = 0.671, 99.5% CI 0.941 to 1.150); Fig. 1 compares the ratings distributions for AI-generated poems and human-written poems. The same phenomenon – where ratings are significantly lower when told the poem is AI-generated but are significantly higher when the poem is actually AI-generated – holds for 13 of our 14 qualitative ratings.
The exception is “original”; poems are rated as less original when participants are told the poem is generated by AI vs. being told the poem is written by a human (two-sided Welch’s t(4654.412) = -16.333, p < 0.0001, Bonferroni-corrected p < 0.0001, mean difference = -0.699, Cohen’s d = -0.478, 99.5% CI -0.819 to -0.579), but originality ratings for actually AI-generated poems are not significantly higher than for actually human-written poems (two-sided Welch’s t(6957.818) = 1.654, p = 0.098, Bonferroni-corrected p = 1.000, mean difference = 0.059, Cohen’s d = 0.040, 99.5% CI -0.041 to 0.160). The largest effect is on “rhythm”: AI-generated poems are rated as having much better rhythm than the poems written by famous poets (two-sided Welch’s t(6694.647) = 35.319, p < 0.0001, Bonferroni-corrected p < 0.0001, mean difference = 1.168, Cohen’s d = 0.847, 99.5% CI 1.075 to 1.260). This is remarkably consistent; as seen in Fig. 2, all 5 AI-generated poems are rated more highly in overall quality than all 5 human-authored poems.
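For concreteness, a Welch’s t-test and Cohen’s d of the kind reported above could be computed as in the following sketch. The data are placeholders, not the study’s ratings; this is an illustration of the comparison, not the authors’ analysis script.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-7 Likert ratings of "overall quality", grouped by actual authorship.
ai_ratings = np.array([6, 5, 7, 6, 5, 6, 4, 7])      # placeholder values
human_ratings = np.array([4, 5, 3, 5, 4, 6, 3, 4])   # placeholder values

# Welch's t-test (unequal variances), two-sided.
t_stat, p_value = stats.ttest_ind(ai_ratings, human_ratings, equal_var=False)

# Bonferroni correction across the 14 rating dimensions.
p_bonf = min(1.0, p_value * 14)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(ai_ratings), len(human_ratings)
pooled_sd = np.sqrt(((n1 - 1) * ai_ratings.var(ddof=1) +
                     (n2 - 1) * human_ratings.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (ai_ratings.mean() - human_ratings.mean()) / pooled_sd

print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}, p_bonf = {p_bonf:.4f}, d = {cohens_d:.3f}")
```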
Before these harms accelerate and touch more lives, Congress and state legislatures must act to make clear that tech companies have a duty to exercise reasonable care in the design of their products. This duty forms the core of the legal liability that every other successful American industry abides by.
By applying the same laws to tech companies that already apply to other manufacturers, we're not stifling innovation. We are channeling the sort of innovation that has led American companies to become industry leaders for decades, and allows families to feel safe using American products. A framework of liability will foster public trust, which is essential for the widespread adoption and success of AI technologies.
We used a linear mixed effects model to predict the Likert scale ratings for each of our 14 qualitative dimensions. We used poem authorship (human or AI), framing condition (told human, told AI, or told nothing), and their interaction as fixed effects. As specified in our preregistration, we initially planned to include four random effects: random intercepts per participant, random slope of poem authorship per participant, random intercept per poem, and random slope of framing condition per poem. As in Study 1, we followed19 in checking the models for overparameterization; PCA dimensionality reduction revealed that the models were overparameterized, specifically because of the random slopes for framing condition per poem. An attempt to fit a zero-correlation-parameter model did not prevent overparameterization; we therefore fit a reduced model for each DV without the random slopes for framing condition.
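The paper does not include analysis code; the sketch below shows how a reduced model of this general shape could be fit with Python’s statsmodels (the authors’ specification, with crossed by-poem random intercepts, is more naturally expressed in R’s lme4). The data file and column names are assumptions for illustration, and the per-poem random intercept is omitted because crossed random effects are awkward in statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per (participant, poem) rating with columns
# rating (1-7), authorship ("human"/"AI"), condition ("told_human"/"told_AI"/"told_nothing"),
# participant, poem. File and column names are hypothetical.
df = pd.read_csv("ratings.csv")

# Reduced model: authorship x framing condition as fixed effects,
# by-participant random intercept plus a random slope for authorship
# (no random slopes for framing condition, mirroring the reduced model above).
model = smf.mixedlm(
    "rating ~ authorship * condition",
    data=df,
    groups=df["participant"],
    re_formula="~authorship",
)
result = model.fit()
print(result.summary())
```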
ANOVA comparisons between the full and reduced models for each DV found that the reduced model provided at least as good a fit for 12 of the 14 DVs: all except “original” and “witty”. We therefore proceed with the reduced model.
We have a choice. We can allow AI to become yet another realm where tech companies operate with impunity and prioritize profit margins over people—including American children. Or we can learn from history and establish a framework of accountability from the outset. Liability has protected consumers—and families—in countless ways throughout the modern era. It's time to extend that protection to the frontier of AI.
For 9 of our 14 qualities, human authorship had a significant negative effect (p < 0.005), with poems written by human poets rated lower than poems generated by AI; for 4 qualities the effect was negative, but merely suggestive (0.005 < p < 0.05). The only quality for which there is not even a suggestive negative authorship effect is “original” (b = -0.16087, SE = 0.10183, df = 29.01975, t = -1.580, p = 0.1250). For 12 of our 14 qualities, the “told human” framing condition had a significant positive effect, and poems are rated more highly when participants are told that the poem is written by a human poet; for “inspiring” (b = 0.21902, SE = 0.11061, df = 693.00000, t = 1.980, p = 0.04808) and “witty” (b = 0.28140, SE = 0.12329, df = 693.00024, t = 2.282, p = 0.02277) the effect is merely suggestive. For all 14 models, the explanatory power is substantial (conditional R-squared > 0.47). Detailed analysis for all qualities can be found in our supplementary materials.
The hackers, identified as "Salt Typhoon," are part of a larger collective called "Typhoon," which has several splinter cells, including Volt Typhoon and Flax Typhoon. Salt Typhoon reportedly exploited vulnerabilities in telecommunications networks to gather intelligence. While the bad actors presumably had carte blanche access to the systems, US officials said the compromised data included private communications from only a limited number of individuals, primarily those involved in government or political activities.
Factor analysis of qualitative ratings
As specified in our pre-registration, we planned to factor analyze responses to the following scales: moving, profound, witty, lyrical, inspiring, beautiful, meaningful, original. However, we found higher-than-expected correlations among all of our qualitative ratings; polychoric correlations ranged from 0.472 to 0.886, with a mean of 0.77. Therefore, we performed factor analysis on all 14 qualitative ratings. Parallel analysis suggested 4 factors. We performed a maximum likelihood factor analysis with an oblique rotation; factor scores were estimated using the ten Berge method21.
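As a rough illustration (not a reproduction of the authors’ analysis), a four-factor maximum likelihood solution with an oblique rotation could be obtained with the Python factor_analyzer package. Note two simplifications: this sketch fits Pearson rather than polychoric correlations, and its factor scores use a regression method rather than the ten Berge estimator reported above. Column names are illustrative.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# One row per (participant, poem) with the 14 qualitative 1-7 ratings as columns.
ratings = pd.read_csv("qualitative_ratings.csv")  # hypothetical file
items = ratings[["quality", "rhythm", "imagery", "sound", "moving", "profound",
                 "witty", "lyrical", "inspiring", "beautiful", "meaningful",
                 "original", "theme", "mood"]]

# Maximum likelihood extraction, oblique (oblimin) rotation, 4 factors
# as suggested by parallel analysis.
fa = FactorAnalyzer(n_factors=4, method="ml", rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))

# Factor scores (regression method; the paper reports ten Berge scoring).
scores = fa.transform(items)
```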
Although the agencies were reluctant to name names, CNN reported in the lead-up to the US presidential election that high-profile individuals, including President Donald Trump and running mate Senator JD Vance, may have been targeted as part of the hacking campaign. The hackers also copied information related to US law enforcement requests, potentially undermining critical ongoing investigations.
CISA and the FBI emphasized that they continue to assist affected companies and encourage other organizations to report suspicious activity.
Factor 1 is most heavily weighted towards “beautiful,” “inspiring,” “meaningful,” “moving,” and “profound”; we take it to correspond to the poem’s emotional quality, and call it “Emotional Quality.” Factor 2 is most heavily weighted towards “rhythm,” “lyrical,” and “sound”; we take it to be the poem’s formal, including structural or metrical, quality, and call it “Formal Quality.” Factor 3 is most heavily weighted towards “imagery,” “mood or emotion,” and “theme”; we take it to reflect the poem’s ability to capture a particular poetic “Atmosphere,” and we call it “Atmosphere.” Factor 4 is most heavily weighted toward “witty” and “original”; we take it to reflect how creative or unique the poem is, and we call it “Creativity.” Fig. 3 shows the factor loadings for each qualitative dimension.
"[We] continue to render technical assistance, rapidly share information to assist other potential victims, and work to strengthen cyber defenses across the commercial communications sector," the agencies stated. "We encourage any organization that believes it might be a victim to engage its local FBI Field Office or CISA."
TechCrunch notes that the breach is the latest in a series of sophisticated cyberattacks attributed to China-linked "Typhoon" hacking groups targeting critical US infrastructure. Experts warn that the campaign demonstrates heightened strategic targeting by PRC-affiliated actors, who increasingly focus on sensitive government and communications systems.
China has denied involvement, with a spokesperson stating that the country "opposes cyberattacks in all forms." However, US officials and cybersecurity experts remain vigilant, warning of the potential for further espionage and disruptive activities.
For each of the four factors, we used a linear mixed effects regression to predict factor values for each participant’s rating of each poem, using the same fixed and random effects used for the 14 qualitative dimension DVs. We again found that the preregistered random effects overparameterized the models, and used the reduced models with no random slopes for framing condition.
Scam Detection is "private by design," Google emphasizes, ensuring users retain full control over the feature. It is disabled by default and can be turned off at any time, either for all calls or during specific ones. All data and voice processing occur entirely on the device, with no audio or transcripts stored or sent to Google's servers.
Google is positioning Scam Detection as a secure and significant improvement in the fight against mobile scams. The company estimates that scammers collectively steal over $1 trillion from victims annually, with phone calls remaining one of their most effective tools. As scam tactics grow increasingly sophisticated, Google has turned to AI to provide reliable protection for users.
We find that across all four factors, the explanatory power of the models is substantial (conditional R-squared > 0.5). The “told human” framing condition has a significant positive effect on all factors, and human authorship has a significant negative effect on 3 of the 4 factors. Figure 4 shows factor scores for human and AI authorship; Fig. 5 shows factor scores for each framing condition; the results for each of the 4 factor-prediction models, with the results for overall quality for comparison, can be found in Table 1.
Scam Detection relies on the Gemini Nano AI model to shield Pixel users from fraudulent calls effectively. However, Google also unveiled an additional security enhancement unrelated to AI. Google Play Protect is gaining real-time capabilities on Pixel 6+ devices. This updated service can now analyze running apps to detect potentially harmful behavior. Initially focused on identifying stalkerware apps, the feature will expand to target other malicious app categories in future updates.
Using qualitative ratings to predict discrimination
As in Study 1, we also used a mixed effects logistic regression (fit to a binomial distribution) to predict participant responses to the discrimination question (“written by a human” or “generated by AI”) for participants in the “told nothing” framing condition. We included authorship (human or AI), stimulus line count (scaled), and stimulus all-lines-rhyme as fixed effects, with random intercepts for participants (dropping stimulus quatrain and stimulus first person from the model we used in Study 1 due to high multicollinearity in Study 2’s smaller set of 10 poems).
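Multicollinearity of the kind that motivated dropping those predictors is commonly screened with variance inflation factors; the minimal sketch below (illustrative file and column names, not the authors’ code) shows how such a check could be run on the poem-level predictors.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# One row per poem with the candidate structural predictors
# (file and column names are hypothetical).
poems = pd.read_csv("poem_features.csv")
predictors = pd.get_dummies(
    poems[["line_count", "all_lines_rhyme", "quatrain", "first_person"]],
    drop_first=True,
).astype(float)
X = add_constant(predictors)

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)  # very large values flag predictors that are nearly redundant
```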