I think this will be helpful because our application has a lot of sync processes with other systems, but our tests are independent and grouped, so default mode would help in these situations.
Thanks for this video!
Happy to hear it's been valuable. 🦝 💙
Great video! Running Playwright tests in serial mode is ideal for debugging, not for CI/CD phases.
Thanks. :) Not sure I agree though. There are better tools than serial mode for debugging (UI mode, debug mode and traces), and there are scenarios where tests need to run sequentially in CI/CD.
If the tests (for whatever reason) rely on each other or conflict over shared state, then CI/CD might need to run them in "serial" or "default" mode.
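For illustration, a minimal sketch of both options, assuming a standard Playwright + TypeScript setup:

```ts
// playwright.config.ts — "default" mode: test files run in parallel,
// tests within one file run in order
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: false,
  // workers: 1, // uncomment to serialize the whole run across files
});
```

```ts
// inside a spec file — "serial" mode: tests run in order and the remaining
// tests are skipped after the first failure
import { test } from '@playwright/test';

test.describe.configure({ mode: 'serial' });
```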
@ChecklyHQ thanks for the clarification. 👍
Happy to. :)
There is always something new to learn in your videos. Thank you for all this. I have a problem, though, which I've faced on almost all the projects I've worked on, and that is handling login-based scenarios in parallel tests. I've resolved this by binding the login users to the worker index, but for this I had to create a couple of users for each worker first. What would be your recommendation to handle this?
I'm actually doing the same in one of my projects for parallel CI/CD runs using existing data. The worker id maps to a certain predefined user account (1 to 4 in my case). Unfortunately, I haven't come across a better solution yet, tbh. But I think it's alright. 🫣
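In case it helps, a minimal sketch of that binding as a worker-scoped fixture — the account names, env vars and the `workerUser` fixture are made up for illustration:

```ts
// auth-fixture.ts — map each worker slot to one predefined user account
import { test as base } from '@playwright/test';

type WorkerFixtures = {
  workerUser: { email: string; password: string };
};

export const test = base.extend<{}, WorkerFixtures>({
  workerUser: [
    async ({}, use, workerInfo) => {
      // parallelIndex is 0-based and stays within 0..workers-1,
      // so with 4 workers each slot always gets the same account
      const id = workerInfo.parallelIndex + 1;
      await use({
        email: `user-${id}@example.com`,
        password: process.env[`USER_${id}_PASSWORD`] ?? '',
      });
    },
    { scope: 'worker' },
  ],
});
```

Tests then import `test` from this file and consume `workerUser` like any other fixture.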
@@ChecklyHQ Thanks.
Nice video as always, thanks!
Is there a way to limit this in the same way for different projects?
For example: I have 3 tests (A, B & C) that I want to run on 3 different browsers with the same dataset/account. Could I run test A on the first browser, and run it on the second only once the first has finished, and so on?
Thanks! I'm not sure I understand your question entirely. What tests should run in which browser when? 😅
@@ChecklyHQ Let me rephrase it:
I would like to prevent some tests from running on multiple browsers (= 1 project for each) at the same time.
So even if workers are available, a test would wait for the same test on the previous browser to finish.
It's very hard to discuss this topic without looking at specifics. But as an easy way, you could always run tests using the `--project` flag or some grep pattern. Then you'd be in full control of what projects / tests run when. :)
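As a rough sketch (the project names here are illustrative), you could define one project per browser and then target them one at a time from the CLI:

```ts
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});

// then run one project (or a tagged subset) at a time, e.g.:
//   npx playwright test --project=chromium
//   npx playwright test --project=firefox --grep "@shared-account"
```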
I'm running Playwright tests on Desktop, Mobile, and Tablet. While all the test cases pass locally, I notice a higher number of skipped test cases in CI/CD, particularly on Mobile and Tablet, even after adjusting the retries and maxFailures settings in the config file. Desktop works better compared to the other two. What might be causing this inconsistency, and how can I address it?
I can't tell what's causing your issues but to find out you should turn on traces (playwright.dev/docs/trace-viewer-intro#recording-a-trace) and inspect what's happening in your CI/CD environment. :)
For me, it's a different configuration or network condition most of the time. Good luck!
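For example, a common setup is to record a trace only on the first retry in CI — a sketch, not necessarily your exact config:

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: process.env.CI ? 2 : 0,
  use: {
    // keep a trace for the first retry of each failing test; open it with
    // `npx playwright show-trace <trace.zip>` or trace.playwright.dev
    trace: 'on-first-retry',
  },
});
```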
Again, great content as always! Funny thing, in the last few days I worked on this topic unintentionally.
Maybe you can help me out with my current problem. I have sequential tests and I want to add an afterAll, but unfortunately afterAll does not support the page fixture. I use the page fixture like you described in your previous video on using fixtures for POM. Do you have an idea?
Thank you.
May I ask what your use case is for `page` in an afterAll hook? (Ofc, I might look into it for another video 🫣)
Deletion/cleanup from UI.
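One workaround sketch: afterAll hooks can use worker-scoped fixtures like `browser`, so you can open a throwaway page there for UI cleanup — the POM class and its method here are placeholders:

```ts
import { test } from '@playwright/test';

test.afterAll(async ({ browser }) => {
  // the `page` fixture is test-scoped, but `browser` is worker-scoped
  // and therefore available here
  const page = await browser.newPage();
  // e.g. const cleanup = new CleanupPage(page);
  //      await cleanup.deleteTestData();
  await page.close();
});
```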
Can you please do a tutorial about Slack integration for reporting Playwright test results? 😢 I've been stuck for months now.
Could you describe a bit more what you're stuck on?
@ I can't find a simple tutorial on how to share my test results to a Slack channel on each test run… all I know atm is that I would need to use the Slack API and a Slack bot, but idk where to start :/
Gotcha. I'll put it on the list. :)
@ thank you so much😭 looking forward to it!
I think it depends on your CI tool. I am using a GitHub Actions plugin; for other tools it would be a bit different but similar.
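For anyone who wants a CI-agnostic starting point, here's a minimal custom-reporter sketch that posts a run summary to a Slack incoming webhook — the `SLACK_WEBHOOK_URL` env var is an assumption, and error handling is omitted:

```ts
// slack-reporter.ts — requires Node 18+ for the global fetch
import type { FullResult, Reporter, TestCase, TestResult } from '@playwright/test/reporter';

class SlackReporter implements Reporter {
  private passed = 0;
  private failed = 0;

  onTestEnd(test: TestCase, result: TestResult) {
    if (result.status === 'passed') this.passed++;
    else if (result.status === 'failed' || result.status === 'timedOut') this.failed++;
  }

  async onEnd(result: FullResult) {
    const webhook = process.env.SLACK_WEBHOOK_URL;
    if (!webhook) return;
    // post a plain-text summary message to the webhook URL
    await fetch(webhook, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `Playwright run ${result.status}: ${this.passed} passed, ${this.failed} failed`,
      }),
    });
  }
}

export default SlackReporter;
```

Register it in the config, e.g. `reporter: [['list'], ['./slack-reporter.ts']]`, and the summary gets posted after every run regardless of the CI tool.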