USPTO Shifts Tone on AI Patents with August 2025 Eligibility Memo
James Denaro / CipherLaw
The August 4, 2025 USPTO memorandum establishes that examiners must show a greater than 50% probability of ineligibility before rejecting AI patent claims, explicitly prohibiting rejections based on uncertainty alone. This "close call" standard represents the most significant practical shift for AI patent prosecution, even though the memo formally claims no new policy. The guidance directly responds to the Federal Circuit's restrictive April 2025 Recentive Analytics decision by clarifying that broadly claimed neural network training remains eligible and by limiting the "mental process" category that threatened to sweep hardware-based AI operations into the abstract idea groupings. For in-house counsel, this creates a more favorable examination environment while reinforcing that genuine technological improvements to machine learning itself—not merely applying generic ML to new fields—remain essential for both USPTO prosecution and litigation survival.
The memo arrives at a critical juncture. Following the July 2024 AI guidance that created significant confusion about neural network training eligibility, and the Recentive decision that invalidated patents merely applying conventional ML to entertainment industry data, the USPTO needed to prevent examiner over-correction while maintaining rigor. Deputy Commissioner Charles Kim's five-page memorandum to the software-examining Technology Centers (2100, 2600, and 3600) accomplishes this by reinforcing existing boundaries rather than announcing new tests, though practitioners widely recognize it as substantively reshaping the examination landscape for AI applications.
Four critical clarifications reshape AI patent examination
The memo provides targeted guidance on four recurring examination challenges, each addressing specific practitioner pain points that emerged after the July 2024 AI guidance.
The mental process grouping now has explicit boundaries. The memo directly warns examiners "not to expand this grouping in a manner that encompasses claim limitations that cannot practically be performed in the human mind." This matters because the mental process category—covering observations, evaluations, judgments, and opinions—had been applied inconsistently to AI operations. The USPTO clarifies that AI operations performed in ways that cannot practically be performed in the human mind do not fall within this grouping. For example, generating word sequences from spectral features extracted from speech signals exceeds human mental capability and thus cannot be rejected as a mental process. This limitation is crucial for hardware-based AI implementations, multidimensional matrix operations in deep learning, and specialized processor operations that examiners might otherwise categorize as "mental" simply because they involve pattern recognition or decision-making concepts.
The "recites" versus "merely involves" distinction restores broad neural network claim eligibility. This clarification resolves the most significant confusion from the July 2024 guidance by distinguishing two seemingly contradictory examples. Example 39's claim to "training the neural network in a first stage using the first training set" does NOT recite a judicial exception because, even though training involves mathematical techniques, the limitation does not set forth or describe any mathematical relationships, calculations, formulas, or equations using words or mathematical symbols. In contrast, Example 47's claim explicitly requiring "a backpropagation algorithm and a gradient descent algorithm" DOES recite mathematical concepts by naming specific algorithms. The practical impact is substantial: before July 2024, neural network training was generally considered eligible at the USPTO; after July 2024, such claims faced heightened scrutiny; the August 2025 memo effectively restores the pre-July 2024 eligibility standard for broadly claimed training. Patent practitioners can now claim neural network training at a high level without triggering abstract idea analysis, provided they avoid explicitly naming mathematical algorithms.
Step 2A Prong Two analysis must consider claims holistically, not element-by-element. The memo emphasizes that additional elements should not be evaluated "in a vacuum, completely separate from the recited judicial exception." Instead, examiners must consider "all the claim limitations and how these limitations interact and impact each other" when determining whether an exception is integrated into a practical application. This addresses the common examiner practice of isolating individual claim elements and dismissing them as conventional, rather than evaluating how they work together. For AI patents specifically, this means examiners must consider how training algorithms, data structures, hardware components, and application-specific implementations interact to create practical applications. The memo reinforces that specifications need not explicitly state improvements if they describe inventions such that improvements would be apparent to one of ordinary skill in the art, providing relief for applications drafted before the July 2024 guidance that may not explicitly characterize AI techniques as "improvements."
The "close call" standard establishes a preponderance threshold for eligibility rejections. This provision, characterized by practitioners as "perhaps the most applicant-friendly point," explicitly requires that rejections should only be made when ineligibility is more likely than not (exceeding 50% probability). The memo states clearly: "A rejection of a claim should not be made simply because an examiner is uncertain as to the claim's eligibility." This represents a significant procedural shift from prior practice where examiners could reject claims based on uncertainty about eligibility. The memo frames this within the broader preponderance of evidence standard that applies to all patentability rejections under 35 U.S.C. §§ 101, 102, 103, and 112, but the explicit articulation for § 101—combined with the specific "close call" framing—provides concrete argumentation tools for practitioners. This standard gives applicants powerful language to challenge rejections in borderline cases and creates a higher bar for initial examiner rejections.
Recentive Analytics prompted urgent USPTO guidance to prevent overcorrection
The Federal Circuit's April 18, 2025 decision in Recentive Analytics v. Fox Corp. created the immediate context necessitating USPTO clarification. The precedential opinion established a strict standard: claims that do no more than apply established methods of machine learning to a new data environment are patent ineligible. The court invalidated four patents covering dynamically optimized event scheduling and television network mapping that used generic machine learning techniques applied to entertainment industry data.
Recentive's key holdings directly threatened broad categories of AI patent applications. The court ruled that standard features of machine learning—iterative training on data, dynamic adjustments based on real-time input, using training data to identify patterns—are "incident to the very nature of machine learning" and thus do not provide an inventive concept. The decision emphasized that performing human tasks with greater speed and efficiency through generic computing does not confer patent eligibility at either Alice step one or step two. Critically, Recentive admitted it was "not claiming machine learning itself" and did "not claim a specific method for improving the mathematical algorithm or making machine learning better"—the patents merely used "any suitable machine learning technique" in a new environment. The Federal Circuit found this insufficient, holding that functional claim language "without disclosing how to implement that concept risks defeating the very purpose of the patent system."
The USPTO's August memo responds by establishing guardrails against over-application of Recentive while maintaining its core principle. The memo's emphasis on analyzing claims as a whole, recognizing that specifications need not explicitly state improvements, and limiting mental process categorization all work to prevent examiners from reflexively applying Recentive to reject any AI claim involving conventional techniques. However, the memo also reinforces Recentive's fundamental teaching by distinguishing between claims that improve machine learning technology itself versus claims that merely apply ML to new fields. The memo explicitly cites Recentive as an example of claims that fail at Step 2A Prong Two because steps were "incidental to automating an abstract idea" rather than reflecting genuine technological improvements.
The tension between Recentive and the memo creates strategic uncertainty for patent holders. While USPTO examination may become more favorable following the memo's guidance, Federal Circuit litigation will continue applying Recentive's strict standard. This divergence means patents that survive examination under the memo's framework may still face validity challenges in litigation under Recentive. The Federal Circuit explicitly stated that machine learning "may lead to patent-eligible improvements in technology" but made clear that improvements must involve "specific implementation" with "steps through which the machine learning technology achieves an improvement"—a standard that may prove difficult to meet even with the memo's more flexible specification requirements.
Filing decisions now require explicit Section 101 risk assessment
For in-house counsel making protection strategy decisions, the August 2025 memo creates a more favorable prosecution environment while simultaneously highlighting inventions that face heightened risk regardless of USPTO guidance.
Patent versus trade secret analysis must now explicitly incorporate eligibility likelihood. Practitioners uniformly recommend that companies evaluate § 101 risk upfront when deciding protection strategies. AI applications that apply conventional ML techniques to business problems—even if novel and nonobvious—face substantial eligibility risk under Recentive that USPTO guidance cannot eliminate. These inventions may be better protected as trade secrets, particularly where the innovation involves proprietary training data, application-specific optimizations, or techniques that competitors cannot easily reverse-engineer. Companies should prioritize patent protection for inventions demonstrating measurable technical advances in system performance, efficiency, accuracy, or other quantifiable metrics, and where the invention solves concrete technical problems rather than purely business challenges.
Priority filing decisions should emphasize inventions with genuine technological improvements. The converging message from both Recentive and the USPTO memo is that improvements to machine learning technology itself receive strong protection, while applications of generic ML to new domains face significant hurdles. In-house counsel should prioritize patent applications for:
Novel neural network architectures or model structures
Improved training algorithms or methodologies beyond backpropagation and gradient descent
Specific data preprocessing or feature engineering techniques that enhance ML performance
Hardware-software integration that enables AI operations impossible with generic computing
Solutions to technical problems in existing ML systems (accuracy, overfitting, computational efficiency, latency)
Conversely, applications that primarily claim "use ML to optimize X in industry Y" without technical improvements to the ML itself should be evaluated skeptically for patent prosecution, regardless of business value or the novelty of applying ML in that specific industry.
Cost-benefit analysis must account for prosecution uncertainty and litigation risk. Even with the memo's more favorable examination guidance, practitioners note significant implementation gaps. An analysis of the 50 most recent Patent Trial and Appeal Board decisions from Technology Center 3600 involving only § 101 rejections (covering July 17–August 12, 2025) showed that examiners were affirmed in 46 of the 50 cases, with only one outright reversal favoring the applicant. This data, compiled after the memo's issuance, suggests either that examiners are not yet following the guidance consistently or that the PTAB has not yet aligned with the memo's approach. For in-house counsel, this means prosecution costs may remain high for AI applications in borderline eligibility territory, and patents that successfully navigate examination may still face validity challenges in litigation where courts apply Recentive directly.
Drafting strategies must balance specificity with abstraction avoidance
The memo's guidance creates specific tactical considerations for patent drafting that differ significantly from traditional best practices in other technology areas.
Avoid explicitly naming mathematical algorithms in independent claims unless essential for novelty. The recite/involve distinction establishes that claims can describe neural network training, optimization processes, or data analysis operations at a high level without triggering abstract idea analysis, but explicitly naming "backpropagation algorithm," "gradient descent algorithm," "support vector machines," or other specific mathematical techniques causes claims to "recite" mathematical concepts requiring full Step 2A analysis. This creates a counterintuitive claiming strategy: broader claims avoiding algorithmic specificity may be more defensible under § 101 than narrower claims explicitly reciting mathematical techniques. Practitioners characterize this as potentially "arbitrary and capricious, or at the very least illogical and ill-conceived," since dependent claims adding specific mathematical methods could be ineligible while broader independent claims remain eligible. Nevertheless, this is the framework examiners must apply.
The practical drafting approach is to claim at a functional level that describes what the system accomplishes rather than the mathematics of how it is achieved. Claim "training a neural network using training data to generate a trained model that classifies input data" rather than "training a neural network using backpropagation and gradient descent algorithms." Use terminology like "machine learning model," "training process," "optimization," and "classification" without mathematical elaboration in independent claims, then add specificity through dependent claims that provide fallback positions if needed.
Specifications must demonstrate technological improvements apparent to skilled artisans, even if not explicit. The memo clarifies that "the specification does not need to explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art." This provides relief for older applications drafted before the July 2024 guidance that may not explicitly characterize inventions as "improvements." However, specifications should still describe the current state of technology, identify technical problems or limitations, and explain how the invention provides measurable advances. For AI patents specifically, this means describing:
The technical problem being addressed (not just business problem)
How conventional approaches fall short (computational limitations, accuracy problems, latency issues)
Specific technical mechanisms that achieve improvements (architectural choices, training modifications, data handling techniques)
Measurable results or expected performance gains (even if quantified results aren't always required)
The specification should enable a skilled artisan to understand why the invention represents a technical advance over prior AI systems, even without explicitly stating "this invention improves computer functioning." This implicit improvement standard provides flexibility while requiring substantive technical disclosure.
Include robust dependent claims with specific technical features and hardware integration. While independent claims should avoid explicitly reciting mathematical algorithms, dependent claims should add layers of technical specificity that demonstrate non-generic implementation. Include dependent claims specifying particular machine types, specialized processors (GPUs, TPUs, neuromorphic hardware), specific data structures, memory architectures, real-time processing constraints, hardware-software interaction mechanisms, and concrete implementation details. These dependent claims serve multiple purposes: providing fallback positions if independent claims face rejection, demonstrating that the invention involves more than generic computer implementation, and supporting arguments that claims as a whole integrate judicial exceptions into practical applications.
Practitioner perspectives reveal optimism tempered by fundamental concerns
The patent community's reaction to the August 2025 memo reflects cautious optimism about near-term prosecution improvements alongside persistent concerns about the fundamental instability of § 101 jurisprudence.
The "tie goes to the runner" interpretation suggests meaningful examination tone shift. Practitioners characterize the close call standard as establishing a "tie goes to the runner" approach where applicants receive the benefit of doubt in borderline cases. This represents a philosophical shift from prior practice where uncertainty about eligibility could justify rejection. The explicit 50% threshold and preponderance language provides concrete argumentation tools, and practitioners universally recommend citing these provisions when responding to § 101 rejections. The memo's repeated emphasis on avoiding over-expansive application of abstract idea groupings, analyzing claims holistically rather than dissecting elements in isolation, and recognizing improvements that need not be explicit all reinforce a more applicant-friendly examination posture. Practitioners describe this as creating a "more favorable examination environment and opportunities to overcome previously problematic rejections."
Critical implementation gaps threaten to undermine memo's practical impact. Despite optimistic reception, practitioners identify significant concerns about actual implementation. The PTAB data showing 46 affirmances out of 50 appeals in the period around the memo suggests examiners may not be applying the guidance consistently. Multiple practitioners note that while the memo provides useful language for arguments, it "is not binding law" and federal courts remain free to apply stricter standards. The fundamental concern is that administrative guidance cannot resolve judicial inconsistency—the Supreme Court's Alice/Mayo framework remains the governing law, and USPTO memos cannot override Federal Circuit precedents like Recentive. One subsequent development provides tentative optimism: following John Squires' confirmation as USPTO Director in September 2025, the agency vacated a PTAB § 101 rejection of a Google AI patent in Ex Parte Desjardins, with Director Squires finding the rejection "troubling" given AI's importance to U.S. interests. This suggests leadership commitment to scrutinizing AI claim rejections, though whether this represents systematic change remains uncertain.
Congressional action remains necessary for fundamental reform. Practitioners across the political spectrum agree that administrative fixes cannot resolve the fundamental problems with § 101 jurisprudence. Multiple commentators emphasize that "Congress must step in to create lasting changes, as current protocols stifle innovation, particularly the innovations made by smaller entities." The core complaint is that § 101 doctrine has become "anything but uniform, consistent and predictable," with outcomes depending heavily on which Federal Circuit panel hears a case and how broadly or narrowly judges characterize claimed inventions. The memo's attempt to distinguish Example 39 from Example 47 highlights this instability—practitioners find the distinction "difficult—if not impossible—to reconcile" and note the logical problem that "it should be impossible for a broad generic claim to be patent eligible while a more narrow dependent claim is found to be ineligible, but that is what the memo seems to say." Until Congress provides statutory clarity on software patent eligibility, these tensions will persist regardless of USPTO administrative guidance.
Conclusion: Tactical improvements amid strategic uncertainty
The August 4, 2025 USPTO memorandum provides meaningful tactical advantages for AI patent prosecution through its explicit close call standard, limits on mental process categorization, restoration of broad neural network training claim eligibility, and requirement for holistic claim analysis. In-house counsel should leverage these clarifications by citing the memo in responses to § 101 rejections, emphasizing the preponderance standard in borderline cases, and ensuring specifications describe technological improvements apparent to skilled artisans even if not explicit.
However, the memo operates within—rather than resolves—the fundamental instability of § 101 jurisprudence. The Recentive Analytics decision establishes that claims merely applying generic machine learning to new data environments remain ineligible regardless of USPTO examination outcomes, creating a prosecution-litigation divergence that strategic patent planning must account for. The counterintuitive claiming strategy of avoiding algorithmic specificity to prevent "reciting" abstract ideas, while logically problematic, reflects the current legal landscape practitioners must navigate.
For filing decisions, the critical insight is that genuine technological improvements to machine learning systems themselves receive robust protection under both USPTO guidance and Federal Circuit precedent, while applications of conventional ML to new industries face substantial risk that favorable examination cannot eliminate. In-house counsel should explicitly incorporate § 101 likelihood into protection strategy decisions upfront, prioritizing patents for technical AI advances while considering trade secrets for business-focused ML applications. The memo creates a more navigable examination process, but only Congressional action can provide the predictability and consistency that AI innovation ultimately requires.