Request to introduce a new `v17::TopK` operation with a deterministic, configurable NaN handling enum. This addresses the issue of implementation-defined NaN ordering in current TopK operations, which varies across frontend frameworks such as NumPy and PyTorch.
### Request Description

### 1. Motivation & Model Examples

As discussed in PR #33633 with @mitruska and @nshchego, the current `v11::TopK` and earlier operations have implementation-defined NaN ordering behavior. Because different frontend frameworks handle NaNs differently (e.g., NumPy treats them as smallest, PyTorch treats them as largest), a deterministic, configurable approach to NaN handling is needed in OpenVINO.

**Model Examples Benefiting from this:**

* **Multimodal AI Models (CLIP, Vision Transformers):** Embeddings can occasionally produce `NaN` values due to numerical instabilities in FP16/BF16 projections. If `TopK` propagates these NaNs unpredictably, it corrupts downstream similarity searches.
* **RAG (Retrieval-Augmented Generation) Pipelines:** When retrieving the top `K` relevant document chunks, a single rogue `NaN` similarity score can currently push valid, highly relevant documents out of the TopK results, breaking the retrieval chain entirely.
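To make the proposed semantics concrete, here is a minimal reference sketch of a top-k with a configurable NaN-ordering enum. The `NaNMode` enum, the `topk` helper, and the index tie-break are illustrative assumptions for this request, not the actual OpenVINO API; they only show how either framework convention (NaN-as-smallest or NaN-as-largest) can be reproduced deterministically.

```python
import math
from enum import Enum

class NaNMode(Enum):
    """Hypothetical enum mirroring the proposed attribute, not the real OpenVINO API."""
    SMALLEST = "smallest"  # NaNs compare below every finite value (NumPy-like per the discussion)
    LARGEST = "largest"    # NaNs compare above every finite value (PyTorch-like per the discussion)

def topk(values, k, largest=True, nan_mode=NaNMode.SMALLEST):
    """Deterministic top-k: NaN placement is fixed by nan_mode, ties break on index."""
    def sort_key(i):
        v = values[i]
        if math.isnan(v):
            # Map NaN to a well-defined extreme so the ordering is total.
            v = float("inf") if nan_mode is NaNMode.LARGEST else float("-inf")
        # Secondary key keeps equal values in ascending index order, making output stable.
        return (v, -i) if largest else (v, i)
    idx = sorted(range(len(values)), key=sort_key, reverse=largest)[:k]
    return [values[i] for i in idx], idx
```

With `nan_mode=NaNMode.SMALLEST`, a NaN score can never displace a valid document from the top-k (the RAG case above); with `LARGEST`, NaNs surface first, which is useful when they should be detected rather than hidden.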