Summary of STAR East Conference – Why It’s Worth Attending a Conference in the US
Recently, I had the opportunity to give a presentation at one of the largest software testing conferences in the US – STAR East. This annual event, held in Orlando, Florida, is one of the biggest conferences on the East Coast. The organizers also run STAR West on the West Coast, and in 2025 they are bringing STAR Canada back to the lineup.
Last year, I attended STAR East as a participant, and this year I had the pleasure of joining as a speaker. Thanks to Sii Poland for supporting me.
STAR East has been organized in Orlando for many years and consistently attracts professionals from around the world. Attending as a participant last year was an amazing experience, but being invited to speak this year was truly special. The conference offered many inspiring talks and thought-provoking discussions with experienced professionals.
What stood out to me the most was the openness of both speakers and attendees. It was easy to talk about real challenges, share ideas, and build meaningful connections.
The Trip
My trip started with a three-hour delay at my departure airport. Fortunately, I still made it in time to Frankfurt Airport, where I had a connecting flight directly to Orlando, and luckily that flight went according to plan. In hindsight, flying out earlier was definitely a smart decision – it helped me avoid potential issues, even if it meant spending extra time at the airport.
Before the Conference
Before the conference, I took a few days off to visit Tampa – one of the most interesting cities I’ve had the chance to explore in the US. I stayed in a hotel in the historic district of Ybor City, an area famous for its cigar-making heritage. Even today, the neighborhood retains its vibrant and unique character. One of its most charming quirks? Chickens roaming freely through the streets – a well-known and beloved feature of Ybor City.
The Conference
The conference attracted a large crowd—while I don’t have the exact numbers, I’d estimate there were around 500 to 600 attendees. It’s a major event, on par with some of the biggest testing conferences in Europe, such as TestCon or TestWarez.
A Few Presentations That Stood Out to Me
At events like this, it’s always a challenge to attend all the sessions you’re interested in—multiple tracks often run in parallel, making it impossible to be everywhere at once. It can be a bit frustrating, but thankfully, many of the presentations were recorded, allowing attendees to catch up later.
Here are a few thoughts on the sessions that stood out to me:
Keynote – Dona Sarkar: Oops, AI Did It Again: How to Get AI and Agents to Stop Being Weird and Actually Be Useful in Your Business
Dona’s charisma and stage presence were truly impressive. In her talk, she delivered a powerful and reassuring message: although AI is advancing rapidly, we don’t need to fear it—especially when it comes to concerns about job loss. She used historical analogies, such as the rise of the internet and cloud computing, to highlight that anxiety around emerging technologies is nothing new.
Her key takeaway was that we should at least start using AI, if only to stay relevant within the tech community. Ultimately, it’s our experience and critical thinking that shape how effectively we apply these tools. I couldn’t agree more.
Keynote – Ronit Bohrer Hillel: Redefining Quality – Empowering Autonomous Teams for SaaS Excellence
This keynote was delivered on the second day by Ronit Bohrer Hillel and also focused on the topic of AI. Ronit discussed how AI can enhance automation—not just in testing, but across entire development processes.
However, she emphasized a critical point: human involvement remains essential. Despite the rapid advancements in AI, people continue to play a vital role in ensuring quality, context, and informed decision-making throughout the software lifecycle.
Tariq King and Dionny Santiago
In their joint presentation, Tariq and Dionny walked us through the evolution of AI from 2008 to the present. They emphasized that many concepts we now consider groundbreaking have actually been around for over two decades. The session stood out thanks to its engaging format and live demonstrations.
Tariq showcased tools like LangChain and LangGraph, both used to build basic (or more advanced) AI agents. Dionny demonstrated how to create a simple AI agent connected to a basic database, integrating it with OpenAI tools.
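To give a feel for what such a demo looks like, here is a minimal sketch of a tool-calling agent in TypeScript using LangChain and LangGraph. Everything here is my own illustration, not code from the session: the order-status tool and its in-memory "database" are hypothetical, and exact package layouts can differ between versions.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";

// Hypothetical stand-in for a real database: a simple in-memory lookup.
const orders: Record<string, string> = { "1001": "shipped", "1002": "pending" };

// A tool the agent can call; the model decides when to use it.
const getOrderStatus = tool(
  async ({ orderId }: { orderId: string }) => orders[orderId] ?? "not found",
  {
    name: "get_order_status",
    description: "Look up the status of an order by its ID.",
    schema: z.object({ orderId: z.string() }),
  }
);

// Prebuilt ReAct-style agent: the model reasons, calls tools, then answers.
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [getOrderStatus],
});

const result = await agent.invoke({
  messages: [new HumanMessage("What is the status of order 1001?")],
});
console.log(result.messages.at(-1)?.content);
```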
The recurring theme throughout their talk was clear: while AI has made remarkable progress, human involvement remains essential. As an industry, we’re still in the early stages of figuring out how to meaningfully integrate AI into our daily workflows.
Dmitriy Gumeniuk – Smarter Tools, Unstoppable Testers: ML-Powered Insights for Automated Testing Reporting
As always, Dima delivered a clear and insightful presentation. He discussed the challenges of developing ReportPortal—a test automation dashboard—with a focus on building algorithms that identify patterns and similarities in error logs and stack traces.
One of ReportPortal’s standout features is its ability to automatically group test failures. Typically, when a test fails, we classify the issue manually (e.g., environment issue, test bug, product bug). ReportPortal learns from these classifications and, based on a configurable similarity threshold, can automatically assign future failures to the appropriate category—such as a known automation bug.
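To make the idea concrete, here is a toy sketch of similarity-based triage in TypeScript. This is my own simplification for illustration, not ReportPortal's actual algorithm, which analyzes logs and stack traces with far more sophistication:

```typescript
// Toy illustration of similarity-based failure triage (NOT ReportPortal's
// real algorithm): match a new failure against previously labeled ones
// and reuse the label when similarity clears a configurable threshold.
type LabeledFailure = { log: string; label: string };

const tokenize = (log: string): Set<string> =>
  new Set(log.toLowerCase().split(/\W+/).filter(Boolean));

// Jaccard similarity between two logs: 0 = no shared tokens, 1 = identical.
function similarity(a: string, b: string): number {
  const ta = tokenize(a), tb = tokenize(b);
  const shared = [...ta].filter((t) => tb.has(t)).length;
  return shared / (ta.size + tb.size - shared);
}

function classify(log: string, known: LabeledFailure[], threshold = 0.7): string {
  let best = { label: "to investigate", score: 0 };
  for (const k of known) {
    const score = similarity(log, k.log);
    if (score > best.score) best = { label: k.label, score };
  }
  return best.score >= threshold ? best.label : "to investigate";
}

const known: LabeledFailure[] = [
  { log: "TimeoutError: waiting for selector '#login' failed", label: "automation bug" },
  { log: "Error: connect ECONNREFUSED 10.0.0.5:5432", label: "environment issue" },
];
// A similar timeout on a different selector is grouped as "automation bug".
console.log(classify("TimeoutError: waiting for selector '#cart' failed", known));
```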
Dima also shared exciting ideas for future enhancements, including expanding the analysis beyond text logs to incorporate screenshots for richer context. He also introduced a quality gate feature, which alerts teams when test quality drops below a set threshold—a valuable capability for maintaining high standards in test automation.
Andrew Knight – Scaling Automated Tests to Infinity and Beyond
In his presentation, Andrew shared several key strategies for scaling automated test execution. He began by emphasizing the importance of parallelism and designing independent tests. Another challenge he highlighted—often underestimated by teams—is the need to properly configure waits for specific elements to ensure stability and speed.
Personally, I believe the topic of test automation scalability doesn’t get the attention it deserves. Many organizations focus heavily on increasing the number of automated tests but neglect the infrastructure needed to run those tests efficiently.
I recall a project where we developed around 150 hybrid tests (UI + API) using Playwright. We executed them on a machine with just four virtual cores—and still completed the run in about 8 minutes. With more powerful hardware, that time could have been reduced even further, all thanks to effective parallel execution.
When you leverage parallelism properly, running your full test suite not just daily—but even several times per hour—becomes a realistic goal.
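For context, parallelism in Playwright is mostly a configuration concern. Below is a minimal sketch of the relevant settings; the worker count and CI flag are illustrative defaults, not the exact setup from the project I described:

```typescript
// playwright.config.ts – a minimal sketch of parallel-execution settings.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  fullyParallel: true, // run tests within each file in parallel, too
  workers: process.env.CI ? 4 : undefined, // e.g. one worker per vCore on CI
  retries: process.env.CI ? 1 : 0, // retry once on CI to absorb flakiness
  use: {
    trace: "on-first-retry", // collect a trace only when a retry happens
  },
});
```

Independent tests are what make this safe: as soon as two tests share state, parallel runs start failing in ways a serial run never reveals.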
Panel – Revolutionizing Test Automation: The Impact of Generative AI on Quality Engineering
This panel brought together several experts, including Dona Sarkar, Tariq King, Melissa Benua, and Philip Daye, with Adam Auerbach as the host. What stood out to me was the range of perspectives, from highly enthusiastic to more cautious.
Some panelists were excited about how generative AI and practices like vibe coding could significantly enhance test automation. For example, AI can help define testing scopes or even generate automation scripts. Others emphasized the need to be aware of security concerns, such as the differences between public and private models, and the importance of training your own models or running large language models (LLMs) locally.
One particularly interesting idea was to build simple, AI-based tools — for instance, a tool that detects accessibility issues in your code. Many of the panel’s insights came back to a core message: experiment with AI and find practical, meaningful use cases.
Another important point raised was the need for proper licensing when using AI tools in enterprise environments. Companies should provide permissions and ensure that sensitive data is not fed into models by accident.
The session closed with a powerful reminder: the usefulness of AI depends largely on our own creativity.
A second panel explored a wide range of topics in test automation – from low-code tools to traditional code-based approaches. It featured several industry experts, including Andrew Knight, Joe Colino, Janna Loeffler, Joe Colantonio, and Srinivas Rao Labhani, with Adam Auerbach as the moderator.
A recurring theme was the growing role of AI in test automation. The panelists shared different perspectives on how AI tools can support testing workflows. From my point of view, developing strong technical skills remains essential to use these tools effectively.
My Journey as a Speaker
I’ve been speaking at conferences for many years, and I’ve had the privilege of appearing at nearly all major testing events in Poland. My international speaking adventure began two years ago, starting with QA Global Summit, then Mabl Experience in Boston, and later TestCon 2024 — where my talk was selected as one of the five best presentations.
This year, I’ll be speaking at TestCon again. I highly recommend this event — the cinema hall setup is brilliant, the screen clarity is top-notch, and the chairs are incredibly comfortable!
🚀 My STAR East 2025 Talk: Playwright Tips & Tricks: Why Does Microsoft’s Framework Perform So Well for Testing?
In my STAR East talk, I covered several practical and advanced aspects of working with Playwright.
I began by showcasing lesser-known features that help boost efficiency (the sketch below shows all three in action), such as:
- auto-wait behavior (and how it actually works under the hood),
- the expect().toPass() method, which retries a block of custom assertions until it passes within a defined timeout,
- and waitForURL() – a powerful method that waits until the page navigates to a URL matching a partial URL (glob pattern) or even a regex.
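Here is a quick sketch showing all three features together; the page, selectors, and URLs are hypothetical, purely for illustration:

```typescript
import { test, expect } from "@playwright/test";

test("auto-wait, toPass, and waitForURL in action", async ({ page }) => {
  await page.goto("https://example.com/login");

  // 1. Auto-wait: click() waits for the button to be visible, enabled,
  //    and stable before acting – no manual sleeps needed.
  await page.getByRole("button", { name: "Sign in" }).click();

  // 2. waitForURL: block until navigation lands on a matching URL
  //    (glob patterns and regexes are both supported).
  await page.waitForURL("**/dashboard");

  // 3. toPass: retry a block of custom assertions until it passes
  //    within the given timeout.
  await expect(async () => {
    const count = await page.locator(".notification").count();
    expect(count).toBeGreaterThan(0);
  }).toPass({ timeout: 10_000 });
});
```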
Next, I moved on to a hybrid approach – combining API and UI testing. I walked the audience through a real-world example using the Trello API (see the sketch after this list):
- Creating boards, lists, and cards programmatically,
- Then interacting with them via the UI, such as dragging cards between columns.
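A condensed sketch of that flow (the Trello key/token come from the environment, and the UI selectors are illustrative – Trello's real DOM is more complex):

```typescript
import { test, expect } from "@playwright/test";

const auth = `key=${process.env.TRELLO_KEY}&token=${process.env.TRELLO_TOKEN}`;
const api = "https://api.trello.com/1";

test("seed data via the API, then drag a card in the UI", async ({ page, request }) => {
  // 1. Create a board, two lists, and a card through the REST API –
  //    far faster and more stable than clicking through the UI.
  const board = await (await request.post(`${api}/boards/?name=Demo&${auth}`)).json();
  const todo = await (await request.post(`${api}/lists?name=To%20Do&idBoard=${board.id}&${auth}`)).json();
  await request.post(`${api}/lists?name=Done&idBoard=${board.id}&${auth}`);
  await request.post(`${api}/cards?name=My%20card&idList=${todo.id}&${auth}`);

  // 2. Switch to the UI and drag the card between columns.
  await page.goto(board.url);
  await page.getByText("My card").dragTo(page.getByText("Done", { exact: true }));
  await expect(page.getByText("My card")).toBeVisible();
});
```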
I also shared available Playwright integrations:
- Lighthouse (I’ve written a full article about this — link [here]),
- Visual Regression Testing (a short example below), and
- A custom framework I built in Playwright, which I detailed on the Sii Poland blog. [Part 1 is live, and part 2 is coming soon.]
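Of the three, visual regression testing is the quickest to try, because it is built into Playwright's assertions. A minimal, hypothetical example:

```typescript
import { test, expect } from "@playwright/test";

test("homepage matches the stored baseline", async ({ page }) => {
  await page.goto("https://example.com");
  // The first run stores a baseline screenshot; later runs fail if pixels
  // diverge beyond the configured threshold.
  await expect(page).toHaveScreenshot("homepage.png", { maxDiffPixelRatio: 0.01 });
});
```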
It was amazing to present to a packed room and share my knowledge on an international stage.
There were 86 people in the audience during my presentation, and it received a fantastic score of 9.75 out of 10! I’m so pleased with the feedback – it really made my day.
🧠 Behind the Scenes: My Presentation Prep
Preparing for an international talk takes effort — especially in a non-native language. To improve my English delivery, I had several sessions with teachers via platforms like Preply and Fluentbe — both have great instructors.
I also tested a new app called GetPronounce.com, which helped me refine pronunciation by recording my speech and suggesting more natural phrases and vocabulary. It turned out to be surprisingly helpful!
Final Thoughts
The trip to the US was a great experience. Many of the topics I presented at STAR East are being expanded further on my blog and the Sii Poland blog – including the upcoming second part of the Playwright framework series. It’s a longer post, but I hope it will help anyone interested in building their own automation frameworks with Playwright.
If you’d like to dive deeper into Playwright, feel free to reach out — I also offer consultations and workshops on this topic.
Call to Action
Let’s connect! Whether you’re curious about Playwright, test architecture, or AI in automation — I’m happy to share insights, collaborate, or speak at your next event.