Request to standardize and abstract prompt/context caching, currently supported by Gemini and Claude with differing syntax. The user suggests a general-purpose `cache_options` to cater to various LLM providers, aligning with LangChain's role as an abstraction layer.
Hi, I noticed that prompt/context caching is supported by both Gemini and Claude, but the syntax differs between the two providers. Is it possible to abstract this away and offer a general-purpose `cache_options` that caters to different LLM providers? LangChain is supposed to be the abstraction layer that handles this, and I was hoping it could be implemented.

FYI - @lkuligin @baskaryan @efriis
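To illustrate the idea, here is a minimal sketch of what such an abstraction might look like: a generic `cache_options` dict translated into each provider's native caching fields. The function name `translate_cache_options` and the `ttl_seconds` key are hypothetical, not existing LangChain APIs; the provider-side shapes loosely follow Anthropic's `cache_control` block marker and Gemini's TTL-based cached content, but exact field names would need to be checked against each SDK.

```python
def translate_cache_options(provider: str, cache_options: dict) -> dict:
    """Hypothetical mapping from a generic cache_options dict to
    provider-specific request fields. Illustrative only."""
    ttl = cache_options.get("ttl_seconds", 300)
    if provider == "anthropic":
        # Claude-style: mark content blocks for caching.
        return {"cache_control": {"type": "ephemeral"}}
    if provider == "google":
        # Gemini-style: cached content with an explicit TTL.
        return {"cached_content": {"ttl": f"{ttl}s"}}
    raise ValueError(f"Unsupported provider: {provider}")
```

A chat model integration could then accept `cache_options` uniformly and perform this translation internally, so users never touch provider-specific syntax.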