Quick question for product teams here. When we test UI changes (animations, flows, micro-interactions, etc.), the feedback ends up scattered everywhere:

- Slack messages
- support tickets
- app reviews
- survey responses
- customer interviews

For example, we were testing a few animation concepts created with Jitter, and suddenly we had dozens of comments across different channels. The hardest part wasn't collecting feedback, it was understanding patterns. A few loud users would complain about something minor, while a quieter usability issue appeared repeatedly but wasn't obvious at first.

We eventually started grouping similar feedback to track how often certain issues appeared over time. That made roadmap discussions way more objective.

Curious how other teams handle this stage. Do you manually tag feedback or use some kind of analysis workflow?
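For what it's worth, the "group similar feedback and count how often it recurs" step can start as something very simple before you reach for a dedicated tool. Here's a minimal sketch in Python: the tags, keywords, and sample feedback are all made up for illustration, and real setups would likely use fuzzier matching (embeddings, clustering) rather than exact keyword hits.

```python
from collections import Counter

# Hypothetical tag -> trigger-phrase mapping; invent your own taxonomy.
TAG_KEYWORDS = {
    "animation-speed": ["too fast", "too slow", "jarring"],
    "navigation": ["can't find", "lost", "back button"],
    "performance": ["lag", "stutter", "slow to load"],
}

def tag_feedback(text):
    """Return every tag whose trigger phrases appear in the feedback text."""
    lowered = text.lower()
    return [tag for tag, phrases in TAG_KEYWORDS.items()
            if any(p in lowered for p in phrases)]

# Feedback pooled from any channel (Slack, tickets, reviews, surveys...).
feedback = [
    "The new transition feels too fast on mobile",
    "Page stutters when the menu animates",
    "I can't find the settings screen anymore",
    "Menu animation is jarring and slow to load",
]

# Frequency of each issue across all channels -- the signal that separates
# one loud complaint from a quiet-but-recurring usability problem.
counts = Counter(tag for item in feedback for tag in tag_feedback(item))
print(counts.most_common())
```

Once counts are attached to dates, you can also track whether an issue is growing or shrinking release over release, which is what made our roadmap conversations more objective.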