This summer, I had the opportunity to work on the Support Team at Broadcom. In my first
week, I met the entire team and quickly began shadowing Support Engineers to learn what their
daily responsibilities looked like. They introduced me to a web-based tool called Wolken, which
they use to manage support cases, customer information, and case statuses.
While observing how they completed and closed cases, I noticed some inefficiencies in their
workflow. Although they used Wolken to communicate with customers, they often had to leave
the platform to search for the information they needed to resolve the case.
To streamline their workflow and reduce resolution time, I decided to build a tool that would bring that missing information into the platform itself. Since I didn't have access to Wolken's source code, I developed a Chrome Extension that overlays functionality directly onto the Wolken interface.
Getting Started: Input, Planning, & Development
Before jumping into development, I spoke with the team to identify features they felt were
missing in Wolken. Based on our discussions, I compiled a list of potential enhancements:
● Audio/visual alerts for new cases
● Automated queue refreshing
● PII (Personally Identifiable Information) detection
● AI-driven case insights
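As a preview of how the first two items ended up taking shape, here's a minimal content-script sketch. Everything Wolken-specific is a placeholder: the selectors, the poll interval, and the alert sound are stand-ins, not Wolken's real DOM or assets.

```javascript
// Content-script sketch of audio/visual alerts plus automated queue
// refreshing. POLL_MS, the selectors, and alert.mp3 are all placeholders.

const POLL_MS = 30_000; // hypothetical refresh cadence
let lastCaseCount = 0;

function refreshQueue() {
  // Hypothetical: click Wolken's own refresh control instead of
  // reloading the entire page.
  document.querySelector('[data-testid="queue-refresh"]')?.click();
}

function checkForNewCases() {
  const rows = document.querySelectorAll('.case-row'); // placeholder selector
  if (rows.length > lastCaseCount) {
    // Visual alert: prefix the tab title with the new-case count.
    document.title = `(${rows.length - lastCaseCount} new) ${document.title}`;
    // Audio alert: assumes alert.mp3 ships with the extension as a
    // web-accessible resource; playback may be blocked by autoplay rules.
    new Audio(chrome.runtime.getURL('alert.mp3')).play().catch(() => {});
  }
  lastCaseCount = rows.length;
}

setInterval(() => {
  refreshQueue();
  // Give the refreshed queue a moment to render before counting rows.
  setTimeout(checkForNewCases, 2_000);
}, POLL_MS);
```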
With the requirements defined, I began researching how Chrome Extensions work and kicked
off development during my first week.
The project was organized into two main services:
├── backend/
├── extension-ui/
├── docs/
├── docker-compose.yml
└── Jenkinsfile
extension-ui houses the code for the downloadable Chrome Extension, and
backend contains a Flask server that handles network requests from the extension.
We used Docker to ensure the application runs consistently across environments, and Jenkins to automate testing and the deployment of Docker images to Artifactory.
During development, I wrote several JavaScript modules to power the UI overlay
inside Wolken. The interface dynamically updates as the user navigates through different tabs in Wolken.
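To illustrate, here's a stripped-down sketch of that re-rendering logic, assuming Wolken updates the page without full reloads. The view-detection heuristic and renderOverlayFor are simplified stand-ins for the real modules.

```javascript
// Sketch of the overlay following Wolken's in-page navigation, assuming
// tab changes update the DOM rather than triggering a full reload.

let lastView = null;

function renderOverlayFor(view) {
  // Stub: the real modules swap in the panel matching the current tab
  // (queue list, case detail, and so on).
  console.log('re-rendering overlay for', view);
}

const observer = new MutationObserver(() => {
  // Placeholder heuristic: treat a URL change as a view change.
  const view = location.hash || location.pathname;
  if (view !== lastView) {
    lastView = view;
    renderOverlayFor(view);
  }
});

observer.observe(document.body, { childList: true, subtree: true });
```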
One core feature is the AI Assistant, which sends relevant case context to a large language model (LLM) to generate accurate responses. To extract context, the extension taps into the page’s DOM and Wolken’s network data.
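Here's a rough sketch of what the DOM side of that context extraction can look like. Every selector below is invented for illustration; the real extension targets Wolken's actual markup and supplements it with captured network data.

```javascript
// Sketch of pulling case context out of the page for the AI Assistant.

function extractCaseContext() {
  const text = (sel) => document.querySelector(sel)?.textContent.trim() ?? '';
  return {
    caseId: text('[data-field="case-id"]'),
    subject: text('[data-field="subject"]'),
    product: text('[data-field="product"]'),
    description: text('[data-field="description"]'),
    // The last few entries in the case's note thread.
    recentNotes: [...document.querySelectorAll('.case-note')]
      .slice(-3)
      .map((note) => note.textContent.trim()),
  };
}
```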
Backend Development: LLM Integration & Expected Impact
To integrate the LLM, I first had to solve a mixed-content issue. The model we were using, Qwen 2.5b, was only reachable over plain HTTP, while Wolken is served over HTTPS. Browsers block HTTP requests initiated from HTTPS pages, so the extension couldn't call the model directly.
The solution was to build a Flask-based HTTPS proxy server. Here’s how the flow works:
1. The extension captures the user’s query and sends it to the proxy server.
2. The proxy forwards the request to the LLM.
3. The LLM returns a response.
4. The proxy sends the response back to the extension, where it's displayed in the UI.
This architecture enables seamless AI-powered assistance directly within Wolken.
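From the extension's side, steps 1 and 4 boil down to a single fetch call to the proxy. The endpoint URL, request body, and response shape below are placeholders for the real proxy API.

```javascript
// Client-side view of steps 1 and 4 of the flow above.

const PROXY_URL = 'https://llm-proxy.example.internal/api/ask'; // placeholder

function renderAnswer(answer) {
  // Stub: the real extension writes the answer into the overlay panel.
  console.log(answer);
}

async function askAssistant(query, context) {
  const res = await fetch(PROXY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, context }),
  });
  if (!res.ok) throw new Error(`Proxy returned ${res.status}`);
  const { answer } = await res.json(); // assumed response shape
  renderAnswer(answer);
}

// Usage, pairing the query with page context (e.g. from the
// extractCaseContext sketch earlier):
// askAssistant('Summarize this case', extractCaseContext());
```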
The pre-release of the Chrome Extension is set to launch the week of July 28, and I'm excited to see it in action. I believe it will meaningfully improve the engineers' workflow. For example:
● PII Detection helps reduce manual review of documentation (see the sketch after this list).
● AI Assistant functionality allows engineers to find answers faster and spend more time
solving problems.
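To give a sense of the PII detection idea, here's a deliberately simplified sketch that scans case text for a few common patterns. A real detector would need a much broader pattern set and false-positive handling than this.

```javascript
// Simplified PII scan over case text; the patterns are illustrative only.

const PII_PATTERNS = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  phone: /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function findPII(text) {
  const hits = [];
  for (const [kind, pattern] of Object.entries(PII_PATTERNS)) {
    for (const match of text.matchAll(pattern)) {
      hits.push({ kind, value: match[0] });
    }
  }
  return hits;
}

// findPII('Reach me at jane.doe@example.com') → [{ kind: 'email', ... }]
```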
Recap & Reflection
As my summer at Broadcom wraps up, I’m incredibly thankful for the opportunity to work here.
My experience has been nothing but positive. Everyone on the Support Team was kind, helpful,
and knowledgeable. I especially appreciated the guidance from the full-time engineers, whose
expertise in internal systems made development and integration a smooth process.
In conclusion, I’ve learned a ton, grown as a developer, and had a blast building something
useful. I’m proud of the work I’ve done—and grateful to my team for making this such a
rewarding experience.