React2Shell arrived with more noise than signal. A sprawling patch landed, researchers rushed to reverse engineer it, and the community started chasing misleading clues. The result was a wave of incorrect assumptions, shaky PoCs, and wasted cycles. This breakdown clarifies the technical reality and cuts through the confusion.
TL;DR
Context
React’s ~700-line patch for CVE-2025-55182 bundled the real fix with a large set of unrelated changes. That made it difficult to see what was actually being fixed and led to a wave of incorrect early analysis across the industry.
What the community got wrong
- The $F primitive and the loadServerReference code path looked suspicious in the patch, but they were distractions. They never formed a real exploit path.
- Researchers chased what looked like “property traversal” gadgets in requireModule because they resembled familiar JavaScript exploitation patterns. In practice, those paths could not be chained into a working attack.
- Many early PoCs ran only because they mocked internal server behavior (like fake vm or fs modules), and didn’t run real application servers. These scenarios don’t occur in real applications, which is why the PoCs didn’t reproduce outside controlled demos.
What’s actually true
- The vulnerability triggers much earlier in the request lifecycle than expected: during handling of a React Server Component (RSC) request, before any server function or action is validated.
- A real attack needs a multipart/form-data payload plus specific Flight protocol operators ($@, $B, $n). These operators let attackers smuggle unexpected values through the RSC request parser.
- Any server that processes RSC requests is exposed, even if it defines no Server Functions at all. The issue appears before function resolution.
- Traditional SSR (server-side rendering) setups that do not process RSC requests are not affected.
AI Limitations
AI speeds up research, but it also accelerates misdirection when the underlying system is complex. React2Shell showed how quickly teams can be led into “plausible but wrong” theories generated by AI or unvalidated PoCs. Use AI as an assistant, not an authority, and anchor decisions in verified behavior.
Technical Deep Dive
Background: A Patch Designed to Confuse
When the React2Shell vulnerability (CVE-2025-55182) dropped, the React maintainers (including sebmarkbage) did something smart: they didn’t just fix the bug. They released a big patch - approximately 700 lines of code.
This patch wasn't just a quick fix. It included unrelated code changes, general hardening of deserialization flows, and structural shifts. The goal was clear: obfuscation. By diluting the critical fix in a sea of improvements, they bought the ecosystem time. And time it bought - it took roughly 30 hours from the patch to the first public payload.
However, this strategy had a side effect: it confused security teams, researchers, and LLMs alike. Everyone was racing to exploit, but the noise was deafening. We fell victim to the confusion ourselves - what had we misunderstood? What had we missed? How could a PoC claim to operate in a way that defied our own research?
loadServerReference: A tempting misdirect
When looking through the patch in React, something immediately popped out: this diff:
@@ -228,5 +230,8 @@ export function requireModule<T>(metadata: ClientReference<T>): T {
// default property of this if it was an ESM interop module.
return moduleExports.__esModule ? moduleExports.default : moduleExports;
}
- return moduleExports[metadata[NAME]];
+ if (hasOwnProperty.call(moduleExports, metadata[NAME])) {
+ return moduleExports[metadata[NAME]];
+ }
+ return (undefined: any);
}
A patch in a function called requireModule is already enough to pique anyone’s interest. What’s more, it looks like a very tempting primitive: prototype property lookups. In JavaScript, a bare property lookup walks the prototype chain:
var o = {
foo: 4,
};
// We can access properties on the object
o.foo // 4
// But also properties not directly on the object
o.hasOwnProperty('foo'); // true
The problem is when we can access whatever property we want:
var userInput1 = 'hasOwnProperty';
var userInput2 = 'constructor';
var o = {
foo: 4
};
var res1 = o[userInput1]
// res1 === o.hasOwnProperty
var res2 = res1[userInput2];
// res2 === o.hasOwnProperty.constructor === Function
We just got access to an object that wasn’t expected - the Function constructor. By itself, it’s not bad, but it’s a very interesting primitive, and this specific diff looked like a good gadget to exploit.
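To make the danger concrete, here is a minimal sketch (plain Node.js, no React involved) of what reaching the Function constructor through chained property lookups buys an attacker:

```javascript
// Walk from a plain object to the Function constructor via
// inherited properties - no direct reference to Function needed.
const o = { foo: 4 };
const FunctionCtor = o['hasOwnProperty']['constructor'];

// FunctionCtor is the real Function constructor, so whoever controls
// its string argument can build arbitrary code...
const attackerControlled = 'return a * 2';
const fn = FunctionCtor('a', attackerControlled);

// ...and invoking the result executes that code.
console.log(fn(21)); // 42
```

This is why an unconstrained property-traversal gadget is such a prized primitive: two hops from any plain object land you on a code-construction API.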
Let’s roll up our sleeves and figure out how to invoke it. As we’ve written before, the Flight protocol supports multiple different operators like:
- $: Simple strings
- $1: References to other chunks
- $u: The value undefined
- $@: Raw chunk reference
- $B: Blobs
And so forth. One of these special operators is $F:
case 'F': {
// Server Reference
const ref = value.slice(2);
// TODO: Just encode this in the reference inline instead of as a model.
const metaData: {id: ServerReferenceId, bound: Thenable<Array<any>>} =
getOutlinedModel(response, ref, obj, key, createModel);
// ^-----v
return loadServerReference(
response,
metaData.id,
metaData.bound,
initializingChunk,
obj,
key,
);
}
The loadServerReference function is interesting because it calls requireModule (slightly edited for clarity):
function loadServerReference<T>(
response: Response,
id: ServerReferenceId,
bound: null | Thenable<Array<any>>,
parentChunk: SomeChunk<T>,
parentObject: Object,
key: string,
): T {
const serverReference: ServerReference<T> =
resolveServerReference(response._bundlerConfig, id);
const preloadPromise = preloadModule(serverReference);
let promise: Promise<T>;
if (bound) {
promise = Promise.all([(bound: any), preloadPromise]).then(
// ----------------v
([args]: Array<any>) => bindArgs(requireModule(serverReference), args),
);
} else {
if (preloadPromise) {
promise = Promise.resolve(preloadPromise).then(() =>
// v-------------
requireModule(serverReference),
);
} else {
// --------v
return requireModule(serverReference);
}
}
// ... (subsequent handling of `promise` elided for brevity)
}
And how does requireModule look?
export function requireModule<T>(metadata: ClientReference<T>): T {
let moduleExports;
// We assume that preloadModule has been called before, which
// should have added something to the module cache.
const promise: any = asyncModuleCache.get(metadata.specifier);
if (promise.status === 'fulfilled') {
moduleExports = promise.value;
} else {
throw promise.reason;
}
if (metadata.name === '*') {
// This is a placeholder value that represents that the caller imported this
// as a CommonJS module as is.
return moduleExports;
}
if (metadata.name === '') {
// This is a placeholder value that represents that the caller accessed the
// default property of this if it was an ESM interop module.
return moduleExports.default;
}
return moduleExports[metadata.name];
}
Very interesting. If we control metadata.name and metadata.specifier, we can access whatever property we want. Slam dunk, right?
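To see why the hasOwnProperty guard added in the patch neutralizes this kind of lookup, here is a small sketch using a hypothetical module exports object (the names are illustrative, not React's actual values):

```javascript
// Hypothetical module exports object - the export name is illustrative.
const moduleExports = { realExport: 42 };

// An attacker-chosen property name walks up the prototype chain.
const name = 'constructor';

// Pre-patch behavior: a bare lookup reaches inherited properties.
const unguarded = moduleExports[name]; // the Object constructor, not an export

// Post-patch behavior: only own properties are returned.
const guarded = Object.prototype.hasOwnProperty.call(moduleExports, name)
  ? moduleExports[name]
  : undefined;

console.log(unguarded === Object); // true
console.log(guarded);              // undefined
```

The guard costs one call and closes the entire class of prototype-chain traversals for this lookup site.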
While tempting, we were led astray, so were Datadog’s researchers, and so were other researchers we talked to. We were in good company!
There are multiple problems with this gadget, but the most important one: It’s hard to compose. To pull it off, we have to be able to:
1. Store the result somewhere
2. Invoke this gadget again on the result, to go from o[prop1] to o[prop1][prop2]
3. Store the result again
4. Invoke the result with a user-controlled parameter to create a function with a payload
5. Invoke this result to run the function we created
Because requireModule works on references beyond our control, the step from 1 → 2 is hard: It means at the very least we have to find another traversal.
Alternatively, if requireModule were able to require any module you want…well, that’s tempting. We could call require('vm').runInNewContext('payload'). But then the patch wouldn’t fix that: runInNewContext is an own property of vm:
> require('vm').hasOwnProperty('runInNewContext')
true
So the patch wouldn’t have actually fixed anything. This direction was very tempting, and very wrong.
It took a while to zoom out and realize that the : lookup in createModelResolver is the lookup gadget that was actually patched.
The AI Slop
Researchers weren’t the only ones led astray - LLMs got it wrong too. A few hours after the patch was published, we started seeing repos containing exploit payloads like:
{
'$ACTION_REF_0': '',
'$ACTION_0:0': JSON.stringify({
id: 'vm#runInThisContext',
bound: ['console.log(4)']
})
}
The payload looks like a good ol’ fashioned deserialization payload, with weird punctuation and everything, but something didn’t feel right. It didn’t match our understanding of the patched code. It looked too simple for what was otherwise complex machinery. Were we so off track in our research?
Running it against local Next apps did not result in an RCE - it resulted in a lot of nothing. Then how was it supposed to work? Well, the exploits included example servers. Here’s a redacted example:
// 1: Load react internals
const bundledPath = path.join(__dirname, '../node_modules/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js');
const moduleCode = fs.readFileSync(bundledPath, 'utf8');
const moduleExports = {};
const moduleWrapper = new Function('exports', 'require', '__dirname', '__filename', moduleCode);
moduleWrapper(moduleExports, require, path.dirname(bundledPath), bundledPath);
// 2: Mock a Next server manifest
const serverManifest = {
'vm': {
id: 'vm',
name: 'runInThisContext',
chunks: []
},
};
const server = http.createServer(async (req, res) => {
const chunks = [];
req.on('data', chunk => chunks.push(chunk));
req.on('end', async () => {
const formData = parseMultipart(chunks.join(''));
// 3: Call internal function
const actionFn = await moduleExports.decodeAction(formData, serverManifest);
await actionFn();
});
});
server.listen(3000 || process.env.PORT);
Confused? So were we. Here's what this so-called exploit (and a dozen of its copycats) is doing:
1. Load a specific file from React's internals - one of the files that was patched between the vulnerable and fixed versions
2. Mock a Next server manifest object
3. Call an internal function of that React module with the mocked manifest
4. …forcefully call the result
(Remember how we needed a “call this function” gadget before? How fun is it to create one of your own?)
There are many ways of describing these “exploits”. “Wrong” is the kindest. This is not a faithful reproduction of a vulnerable flow. The most crucial part to understand is the server manifest: a build-time artifact of Next.js containing information about which actions the server supports and how to load them. Here it is for an example application we built:
{
"node": {
"4098de70b66c775dae2f14c33bb1e0aae8ef70783d": {
"workers": {
"app/page": {
"moduleId": "[project]/.next-internal/server/app/page/actions.js { ... }",
"async": false,
"exportedName": "clickHandler",
"filename": "app/actions.tsx"
}
},
"layer": {
"app/page": "action-browser"
},
"exportedName": "clickHandler",
"filename": "app/actions.tsx"
}
}
}
To run an action, the client sends the id of the module it wants to call a function on, along with the function name. The server validates that this id exists in the manifest, aborting the request if it doesn’t.
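That validation step can be sketched like this (the manifest shape is simplified and the function name is hypothetical, not Next's actual internals):

```javascript
// Hypothetical sketch of the server-side id validation described above.
function resolveAction(manifest, actionId) {
  const entry = manifest.node && manifest.node[actionId];
  if (!entry) {
    // Unknown id: the request is aborted before any function runs.
    throw new Error(`Unknown server action: ${actionId}`);
  }
  return entry;
}

const manifest = { node: { abc123: { exportedName: 'clickHandler' } } };
console.log(resolveAction(manifest, 'abc123').exportedName); // clickHandler
```

An id like 'vm' simply has no entry, so a request naming it never reaches module loading.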
Without the ability to inject additional entries into the server manifest, there is no legitimate reason for the manifest to include builtin modules like vm that could be abused to execute code. Looking at the code that generates the manifest, the ids are hashes of a salt, the file path, and the export name - not plain strings like vm. That makes guessing a valid module id highly impractical.
This is an intentionally vulnerable server using internal functions in a manner that doesn't resemble the real world. Have you ever used an LLM (especially in an agentic editor like Cursor, Amp, or Windsurf) and it did everything in its power to make things work, including commenting out tests and assigning constant outputs? This was exactly it.
After the Deep Dive: A Roundup of "AI Slop" and Bad PoCs
Because the patch was so effective at hiding the real issue, the void was filled with misinformation.
- Fake Panics: A bunch of sloppy exploits and PoCs were published that relied on mocking internal details of Next.js or React. These often showcased fake vm and fs panics that weren't reproducible in real environments.
- The $F Myth: The $F primitive is not involved in the actual exploit. It relies on a whitelist of compile-time functions. To exploit this, you would need to know difficult-to-guess function IDs of previously vulnerable server-side functions.
- "All Servers are Vulnerable": Not true.
The Facts: The Real Exploit Path
So, if $F is the red herring, what is the real weapon?
- The Trigger: The exploit occurs when attempting to run a React Server Function. In Next.js, this is encoded in the next-action header.
- The Payload: The payload must be multipart/form-data.
- The Operators: The minimal operators required are $@ (Reference), $B (Blob), and $n (Number).
- The Scope: You are vulnerable if your web server handles React Server Components.
- Crucially: You do not need to expose React Server Functions.
- The function name in the header doesn't matter; it doesn't need to exist or be resolved. The vulnerability triggers before that validation completes.
- Safety Check: Standard SSR (Server-Side Rendering) with React is okay and not vulnerable.
Why This Matters
React2Shell is more than a complex vulnerability. It exposed a broader operational risk that now affects every security team: the tendency to rely on AI-generated analysis and rapid community chatter during high-pressure incidents. That speed is tempting, but it also creates blind spots. In this case, AI-driven interpretations and rushed PoCs pushed the industry toward the wrong surfaces while the real exploit path remained hidden in the patch.
For security leaders, this is a reminder that modern incident response requires discipline. You need the ability to separate AI-assisted speculation from verified behavior, and you need processes that prevent well-intentioned teams from acting on conclusions that have not been validated. When the narrative is shaped by AI slop and incomplete analysis, time gets wasted, resources get misallocated, and teams risk missing the real exposure window.
It also matters for the next generation of researchers. AI lowers the barrier to entry and can help people move faster, but it becomes a liability when treated as an authority. Real expertise comes from reading patches, tracing execution paths, building mental models, and challenging your own assumptions. AI can support that work, but it cannot replace it. React2Shell made that painfully clear.
For verified IOCs and remediation rules, visit our live resource page: https://react2shell.miggo.io


