At GPC, one talk kept coming back to the same question: what happens to graphics programmers when AI hardware becomes standard in every GPU?

Every major GPU company is shipping AI accelerators. Servers, phones, consoles, next-gen gaming rigs. The entire industry is betting on ML hardware. And in my opinion, not enough programmers are experimenting with it.

We take pride in our mathematical approximations. Analytical, math-heavy lighting equations. I don't like to predict the future, but I think the shift is already starting. Look at the papers: NVIDIA, AMD, Intel, and us at @Traverse Research. Everyone's money is going toward AI-related techniques and middleware.

It's a different way of thinking. It doesn't come naturally. So programmers ignore it. But I think that's a career-limiting move.

This is where I would start these days (a small sketch of the first point follows below):
- Learn MLPs for graphics approximation
- Study how matrix units work beyond typical AI use cases
- Read the actual research papers companies are publishing

The specialization is already here. You can dedicate an entire career to just Vulkan now.

Question for graphics programmers: are you preparing for the rendering techniques tomorrow will need?

For seniors in my industry: where would you recommend someone get started?
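To make the first bullet concrete, here is a minimal sketch of "MLP for graphics approximation", assuming PyTorch. Schlick's Fresnel approximation is my own choice of stand-in analytic term, not something from the talk: the point is only to show a tiny network learning to reproduce a closed-form graphics function.

```python
# Minimal sketch, assuming PyTorch. Schlick's Fresnel term is used here
# purely as an illustrative analytic function to approximate.
import torch
import torch.nn as nn

def schlick_fresnel(cos_theta, f0=0.04):
    # Analytic target: Schlick's approximation to the Fresnel term.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A tiny MLP: small enough that evaluating it per pixel is plausible.
model = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    cos_theta = torch.rand(256, 1)       # random view angles in [0, 1)
    target = schlick_fresnel(cos_theta)  # ground-truth analytic value
    pred = model(cos_theta)
    loss = loss_fn(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the MLP stands in for the closed-form term.
with torch.no_grad():
    test = torch.tensor([[0.0], [0.5], [1.0]])
    print(model(test), schlick_fresnel(test))
```

The same recipe scales up to heavier targets, such as measured BRDF data or radiance caches, where the network replaces an expensive evaluation or a large lookup table.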