Request for `ChatOpenAI` to preserve and resend the `reasoning_content` field for OpenAI-compatible backends like vLLM, to maintain multi-turn tool-calling flows.
### Issue

When using `langchain_openai.ChatOpenAI` against an OpenAI-compatible backend such as vLLM, some reasoning models return a non-standard assistant field named `reasoning_content`. `ChatOpenAI` currently does not preserve this field on the incoming assistant message and does not resend it on subsequent turns, which breaks multi-turn tool-calling flows for models/backends that expect the field to round-trip.

### Current behavior

- Turn 1 returns an assistant message with both a tool call and `reasoning_content`.
- LangChain parses the tool call, but `reasoning_content` is not surfaced in a reusable way by default.
- On Turn 2, after appending the tool result and invoking the model again, the earlier `reasoning_content` is not included in the outgoing assistant message payload.
- For some vLLM-served reasoning models, this leads to degraded or empty final responses in the tool-result summarization step.

### Expected behavior

If an OpenAI-compatible backend returns extra assistant
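For concreteness, the desired round-trip can be sketched without the framework at all. The helper names below are illustrative, not langchain-openai's actual internals; the `additional_kwargs` slot mirrors where LangChain typically surfaces extra provider fields, but the exact mechanism is an assumption:

```python
# Minimal, framework-free sketch of the requested behavior: stash the
# non-standard `reasoning_content` field when parsing an assistant response,
# then replay it in the outgoing message payload on the next turn.
# All names here are hypothetical; this is not langchain-openai's real API.

def parse_assistant_message(raw: dict) -> dict:
    """Parse a raw OpenAI-style assistant message, preserving extras."""
    msg = {
        "role": "assistant",
        "content": raw.get("content"),
        "tool_calls": raw.get("tool_calls", []),
        "additional_kwargs": {},
    }
    # Preserve the vLLM-style reasoning field instead of dropping it.
    if "reasoning_content" in raw:
        msg["additional_kwargs"]["reasoning_content"] = raw["reasoning_content"]
    return msg


def to_openai_payload(msg: dict) -> dict:
    """Serialize back to the wire format, replaying preserved extras."""
    payload = {"role": "assistant", "content": msg["content"]}
    if msg["tool_calls"]:
        payload["tool_calls"] = msg["tool_calls"]
    reasoning = msg["additional_kwargs"].get("reasoning_content")
    if reasoning is not None:
        payload["reasoning_content"] = reasoning
    return payload


# Turn 1: the backend returns a tool call plus reasoning_content.
raw = {
    "content": None,
    "tool_calls": [{"id": "call_1", "type": "function",
                    "function": {"name": "get_weather", "arguments": "{}"}}],
    "reasoning_content": "User wants weather; call the tool first.",
}
msg = parse_assistant_message(raw)

# Turn 2: the replayed assistant message keeps reasoning_content, so a
# backend that expects the field to round-trip still sees it.
assert to_openai_payload(msg)["reasoning_content"] == raw["reasoning_content"]
```

With behavior like this, appending the tool result and re-invoking the model would send the earlier `reasoning_content` back alongside the assistant turn, rather than silently dropping it.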