Should We Fear AI in Test Automation?
https://applitools.com/blog/should-we-fear-ai-in-test-automation/ | Mon, 04 Dec 2023

Richard Bradshaw explores fears around the use of AI in test automation shared during his session—The Fear Factor—at Future of Testing.

At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I were to pick two words myself, I would go with ‘excited’ and ‘skeptical.’ I’m excited because we seem to see new developments and tools each week. On top of that, we are starting to see AI-powered tooling emerge outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing or needing fewer testers. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes the output is good enough on its own, but the majority of the time it’s helping a human be more efficient. This use of AI, and such messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarize and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared that they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, write themselves, or share the results themselves. The direction shared by Angie Jones in her talk Where Is My Flying Car?! Test Automation in the Space Age, and by Tariq King in his talk Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope for, and believe we’ll see: one where we’re able to do our testing a lot more efficiently by having AI assist us. Hopefully, this means we can release even quicker, with higher quality for our customers.

Another concern shared was about skills that we’ve spent years and a lot of effort learning suddenly being replaced by AI, or made significantly easier with it. I think this is a valid concern but also inevitable. We’ve already seen AI bring significant benefits to developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for—the same is true of GPTs. Therefore, I think the core skills of a tester will remain crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools to our tool belts, I feel it’s going to be important that we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to trust a tool to do testing for me or generate test artifacts for me, I definitely want that basic understanding. So that poses the question: where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If AI tools are doing all this thinking for us, we run the risk of those skills losing focus and no longer being taught. This could lead to us becoming over-reliant on such tools, and also to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan for, mitigate, and ensure doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. That could lead to data bleeding and the exposure of private data, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again—it will try to apply its intelligence, and that’s where the reliability issues come in. What it tested last week may not be the same this week, yet it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge, but also of using these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go? Then the tool tells us pass or fail, or perhaps just goes and logs the bugs for us. I don’t think so. As per Angie’s and Tariq’s talks I mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious with these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Future of Testing: AI in Automation Recap
https://applitools.com/blog/future-of-testing-ai-in-automation-recap/ | Tue, 28 Nov 2023

Recap of the Future of Testing: AI in Automation conference. Watch the on-demand sessions to learn actionable steps to implement AI in your software testing strategy, key considerations around ethics and philosophical considerations, the importance of quality and security, and much more.

The latest edition of the Future of Testing events, held on November 7, 2023, was nothing short of inspiring and thought-provoking! Focused on AI in Automation, attendees learned how to leverage AI in software testing with top industry leaders like Angie Jones, Tariq King, Simon Stewart, and many more. All of the sessions are available now on-demand, and below, we take a look back at these groundbreaking sessions to give you a sneak peek of what to expect before you watch.

Opening Remarks

Joe Colantonio from TestGuild and Dave Piacente from Applitools set the stage for a thought-provoking discussion on reimagining test automation with AI. As technology continues to evolve at a rapid pace, it’s important for software testing professionals to adapt and embrace new tools and techniques. Joe and Dave encouraged attendees to explore the potential of AI in test automation and how it can enhance their current processes. They also touched upon the challenges faced by traditional test automation methods and how AI-powered solutions can help overcome them.

Dave shared one of our latest updates – the integration of Applitools Eyes with Preflight! Learn more about Preflight.

Keynote—Reimagining Test Automation with AI by Anand Bagmar

In this opening session, Anand Bagmar explored how to reimagine your test automation strategies with AI at each stage of the test automation life cycle, including a live demo showcasing the power of AI in test automation with Applitools.

Anand first introduced the transition from Waterfall to Agile software delivery practices, and while we can’t imagine going back to a Waterfall way of working, he addressed the challenges Agile brings to the software testing life cycle. Each iteration brings more room for error across analysis, maintenance, and validation of tests. This is why testers should turn toward AI-powered test automation, with the help of tools like Applitools, to help ease the pain of Agile testing.

The session is aimed at helping testers understand the importance of leveraging AI technology for successful test automation, as well as empowering them to become more effective in their roles. Watch now.

From Technical Debt to Technical Capital by Denali Lumma

In this session, Denali Lumma from Modular dove into the concept of technical debt and proposed a new perspective on how we view it: technical capital. She walked attendees through key mathematical concepts that help calculate technical capital, as well as examples comparing PyTorch vs. TensorFlow, MySQL vs. Postgres, frameworks vs. code editors, and more.

Attendees gained insights into calculating technical capital and how it can impact the valuation of a company. Watch now.

Automating Quality: A Vision Beyond AI for Testing by Tariq King

Tariq King of EPAM Systems took attendees on a journey through the evolution of software testing and how it has been impacted by generative AI. He shared his vision for the future of automated quality, one that looks beyond just AI to also prioritize creativity and experimentation. Tariq emphasized the need for quality and not just using AI to “go faster.” The more quality you have, the more productive you will be.

Tariq also dove into the ethical implications of using AI for testing and how it can be used for good or evil. Watch the full session.

Leveraging ChatGPT with Cypress for API Testing: Hands-On Techniques by Anna Patterson

In this session, Anna Patterson of EVERFI explored practical techniques and provided hands-on examples of how to harness the combined power of Cypress and ChatGPT to create robust API tests for your applications.

Anna guided us through writing descriptive and clear test prompts using HTTP status codes, with a pet store website as an example. She showed in real time how meaningful prompts in ChatGPT can help you create a solid API test suite, while also considering the security requirements of your company. Watch now.

PANEL—Testing in the AI Era: Opportunities, Hurdles, and the Evolving Role of Engineers

Joe Colantonio, Test Guild • Janna Loeffler, mParticle • Dave Piacente, Applitools • Stephen Williams, Accenture

As the use of AI in software development continues to grow, it is important for engineers and testers to stay ahead of the curve. In this panel discussion led by Joe Colantonio from Test Guild, panelists Janna Loeffler from mParticle, Dave Piacente from Applitools, and Stephen Williams from Accenture came together to discuss the current state of AI implementation and its impact on testing.

They talked about how AI is still in its early stages of adoption and why there may always be some level of distrust in AI technology. The panel emphasized the importance of first understanding why you might implement AI in your testing strategy so that you can determine what the technology will help to solve vs. jumping in right away. Many more incredible takes and insights were shared in this interactive session! Watch now.

The Fear Factor with Richard Bradshaw

The Friendly Tester, Richard Bradshaw, addressed the common fears about AI and automation in testing. Attendees heard Richard’s open and honest discussion on the challenges and concerns surrounding AI and automation in testing. Ultimately, he calmed many fears around AI and gave attendees a better understanding of how they can begin to use it in their organization and to their own advantage. Watch now.

Tests Too Slow? Rethink CI! by Simon Stewart

Simon Stewart from the Selenium Project discussed the latest updates on how to speed up your testing process and improve the reliability of your CI runs. He shared insights into the challenges and tradeoffs involved in this process, as well as what is to come with Selenium and Bazel.

Attendees learned how to rethink their CI approach and use these tools to get faster feedback and more reliable testing results. Watch now.

Revolutionizing Testing: Empowering Manual Testers with AI-Driven Automation by Dmitry Vinnik

Dmitry Vinnik explored how AI-driven automation is revolutionizing the testing process for manual testers. He showed how Applitools’ Visual AI and Preflight help streamline test maintenance and reduce the need for coding.

Dmitry shared the importance of test maintenance, no code solutions for AI testing, and a first-hand look at Applitools Preflight. Watch this session to better understand how AI is transforming testing and empowering manual testers to become more effective in their roles. Watch the full session.

Keynote—Where Is My Flying Car?! Test Automation in the Space Age by Angie Jones

In her closing keynote, Angie Jones of Block took us on a trip into the future to see how science fiction has influenced the technology we have today. The Jetsons predicted many futuristic inventions, such as robots, holograms, 3D printing, smart devices, and drones. Angie explored these predictions and showed how far we have come with automation and technology in the testing space.

As technology continues to evolve, it is important for testers to stay updated and adapt their strategies accordingly. Angie dove into the exciting world of tech innovation and imagined the future for test automation in the space age. Watch now.


Visit the full Future of Testing: AI in Automation on-demand archive to watch now and learn actionable steps to implement AI in your software testing strategy, key considerations before you start, other ideas around ethics and philosophical considerations, the importance of quality and security, and much more.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/ | Mon, 16 Oct 2023

We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, Applitools CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating how the never-ending maintenance overhead of tests can be mitigated using AI-driven tooling that already exists today, with concrete examples (e.g., visual validation and self-healing locators).
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it using a real-time in-browser REPL to automate the browser which uses natural language similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Driving Successful Test Automation at Scale: Key Insights
https://applitools.com/blog/driving-successful-test-automation-at-scale-key-insights/ | Mon, 25 Sep 2023

Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their insights for overcoming common challenges. Here are their top recommendations.

Establish clear processes for collaboration.
Daily standups, sprint planning, and retrospectives are essential for enabling communication across distributed teams. “The only way that you can build a quality product that actually satisfies the business requirements is [through] that environment where you’ve got the different teams coming together,” said Ariola Qeleposhi, Test Automation Lead at Accenture.

Choose tools that meet current and future needs.
Consider how tools will integrate and the skills required to use them. While a “one-size-fits-all” approach may seem appealing, it may not suit every team’s needs. Think beyond individual products to the overall solution, advised Anand Bagmar, Senior Solution Architect at Applitools. Each product team should have a test pyramid, and tests should run at multiple levels to get real value from your automation.

Start small and build a proof of concept.
Demonstrate how automation reduces manual effort and finds defects faster to gain leadership buy-in. “Proof of concepts will really help to provide a form of evidence in a way to say that, okay, this is our product, this is how we automate or can potentially automate, and what we actually save from that,” said Qeleposhi.

Consider a “quality strategy” not just a “test strategy.”
Involve all roles like business, product, dev, test, and DevOps. “When you think about it as quality, then the role does not matter,” said Bagmar.

Leverage AI and automation as “seatbelts,” not silver bullets.
They enhance human judgment rather than replace it. “Automation is a lot, at least in this instance, it’s like a seatbelt. You don’t think you’ll need it, but when you need it you better have it,” said Kyle Penniston, Senior Software Developer at Bayer.

Build, buy, and reuse.
Don’t reinvent the wheel. Use open-source tools and existing frameworks. “There will be great resources that you can use. Open-source resources, for example, frameworks that might be there that you can use to quickly get started and build on top of that,” said Bagmar.

Provide learning resources for new team members.
For example, Applitools offers Test Automation University with resources for developing automation skills.

Measure and track metrics to ensure value.
Look at reduced manual testing, faster defect finding, test coverage, and other KPIs. “You need to get some metrics really, and then you need to use that from an automation side of things,” said Qeleposhi.

The key to building a solid foundation for scaling test automation is taking an iterative, collaborative approach focused on delivering value and enhancing quality. With the right strategies and tools in place, teams can overcome common challenges and achieve automation success. Watch the full recording.

Functional Testing’s New Friend: Applitools Execution Cloud
https://applitools.com/blog/functional-testings-new-friend-applitools-execution-cloud/ | Mon, 11 Sep 2023

Dmitry Vinnik explores how the Execution Cloud and its self-healing capabilities can be used to run functional test coverage.

In the fast-paced and competitive landscape of software development, ensuring the quality of applications is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. With the increasing complexity of applications, each with a long list of use cases, and the need for faster release cycles, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.

This path to better software quality is where Applitools, a leading provider of functional testing solutions, becomes a must-have with its innovative offering, the Execution Cloud.

Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.

This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate this cloud platform’s features, like auto-fixing selectors broken by a change in the production code.

Why Execution Cloud

As discussed, the Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.

One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update related test coverage, the Execution Cloud would automatically fix our tests. This cloud platform would use one of the previous runs to deduce another relevant selector and let our tests continue running. 

This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests. 

Functional Testing and Execution Cloud

It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. However, with the Execution Cloud in place, Applitools offers its users broader, more scalable test capabilities. This cloud platform lets us focus on all types of functional testing, including non-visual testing.

One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line. There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while utilizing the self-healing capability of the Execution Cloud.

Adam Carmi, Applitools CTO, demos the Applitools Execution Cloud and explores how self-healing works under the hood in this on-demand session.

Writing Test Suite

As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing this post, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. However, more test frameworks will be supported in the near future.

Fortunately, Selenium is a highly used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.

Setting Up Demo App

Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.

To note, all the code for our version of the application is available here.

First, we need to clone the demo app’s repository: 

git clone git@github.com:dmitryvinn/docs-demo-app.git

We will need Node.js version 10.13 to work with this demo app, which can be installed by following the steps here.

After we set up Node.js, we should open a terminal, navigate into the project’s directory, and run the following command to install the necessary dependencies:

cd docs-demo-app

npm install

The next step is to start the app locally:

npm run dev

Now our demo app is accessible at ‘http://localhost:3000/’ and ready to be tested.

Docs Demo App 

Deploying Demo App

While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them. 

After we deploy our demo app, it will appear as running on the Vercel Dashboard:

Demo App Deployed on Vercel

Now, we can write our tests for a production URL of our demo application available at `https://docs-demo-app.vercel.app/`.

Setting Up Test Automation

Execution Cloud offers great flexibility when it comes to working with our tests. Rather than re-writing our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup part of our tests, and we can use the Execution Cloud. 

For our article, our test case will validate navigating to a specific page and pressing a counter button. 

To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.

We can use any Integrated Development Environment (IDE) to write tests like IntelliJ IDEA or Visual Studio Code. Since we use JavaScript as our programming language, we will rely on NPM for the build system and our test runner.

Our tests will use Jest as its primary testing framework, so we must add a particular configuration file called `jest.config.js`. We can copy-paste a basic setup from here, but in its shortest form, the required configurations are the following.

module.exports = {
    clearMocks: true,
    coverageProvider: "v8",
};

Our tests will require a `package.json` file which should include Jest, Selenium WebDriver, and Applitools packages. Our dependencies’ part of the `package.json` file should eventually look like the one below:

"dependencies": {

      "@applitools/eyes-selenium": "^4.66.0",

      "jest": "^29.5.0",

      "selenium-webdriver": "^4.9.2"

    },

After we install the above dependencies, we are ready to write and execute our tests.

Writing the Tests

Since we are running a purely functional Applitools test with its Eyes disabled (meaning we do not have a visual component), we will need to initialize the test and have a proper wrap-up for it.

In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.
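
As a rough sketch of what that setup can look like (the app name is a placeholder, and the `APP_NAME` and `batch` names mirror the snippets later in this article; `BatchInfo` comes from the Eyes SDK):

const { Eyes, BatchInfo } = require('@applitools/eyes-selenium');

const APP_NAME = 'Documentation Demo App'; // placeholder name
let batch;

beforeAll(() => {
    // Group all tests in this file under one batch in the dashboard.
    batch = new BatchInfo(APP_NAME);
});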

To enable Execution Cloud for our tests, we need to ensure that we activate this cloud platform on the account level. After that’s done, in our tests’ setup, we will need to initialize the WebDriver using the following code:

let url = await Eyes.getExecutionCloudUrl();
driver = new Builder().usingServer(url).withCapabilities(capabilities).build();

For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.

describe('Documentation Demo App', () => {
…
    test('should navigate to another page and increment its counter', async () => {
        // Arrange - go to the home page
        await driver.get('https://docs-demo-app.vercel.app/');

        // Act - go to another page and click a counter button
        await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
        await driver.findElement(By.className('button-counter')).click();

        // Assert - validate that the counter was clicked
        const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
        await expect(finalClickCount).toContain('Clicked 1 times');
    })
…

Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends. 

To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:

await driver.executeScript(
    'applitools:startTest',
    {
        'testName': expect.getState().currentTestName,
        'appName': APP_NAME,
        'batch': { "id": batch.getId() }
    }
)

Lastly, we need to tell our automation when the test is done and what its results were. We will add the following code that sets the status of our test in the `afterEach()` hook:

await driver.executeScript('applitools:endTest',
    { 'status': testStatus })

Now, our test is ready to be run on the Execution Cloud.

Running the Test

To run our test, we need to set the Applitools API key. We can set it in the terminal or store it as an environment variable:

export APPLITOOLS_API_KEY=[API_KEY]

In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.

Now, we need to navigate to the directory where our tests are located and run the following command in the terminal:

npm test

It will trigger the test suite that can be seen on the Applitools Dashboard:

Applitools Dashboard with Execution Cloud enabled

Execution Cloud in Action

It’s a well-known fact that apps go through a lifecycle. They get created, get bugs, change, and ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s very common for a test to fail after a change in an app.

Let’s say a developer working on a counter button component changes its class name to `button-count` from the original `button-counter`. There could be many reasons this change could happen, but nevertheless, these modifications to the production code are extremely common. 

What’s even more common is that the developer who made the change might forget or not find all the tests using the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues, which could significantly impact our users.

Execution Cloud and its self-healing capabilities were built specifically to address this problem. This cloud platform would be able to “self-heal” our tests that were previously running against a class name `button-counter`, and rather than failing these tests, the Execution Cloud would find another selector that hasn’t changed. With this highly scalable solution, our test coverage would remain the same and let us focus on correcting issues that are actually causing a regression in production.

Although we are running non-Eyes tests, the Applitools Dashboard still gives us several valuable artifacts, like a video recording of our test, and even lets us export WebDriver commands!

Want to see more? Request a free trial of Applitools Execution Cloud.

Conclusion

Whether you are a small startup that prioritizes quick iterations, or a large organization that focuses on scale, Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be – the first line of defense in ensuring the best customer experience for our users.

With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we are moving towards a space where tests don’t become something we accept as constantly failing or a detriment to our developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do. 

With these functionalities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow that can supercharge the productivity and efficiency of every engineering team.

Welcome Back, Selenium Dave!
https://applitools.com/blog/welcome-back-selenium-dave/ | Tue, 05 Sep 2023

Dave Piacente

Let me tell you a story. It’s one I haven’t told before. But to do it, let’s first get acquainted.

Hi – I’m Dave Piacente. You may know me from a past life when I went by the name Dave Haeffner and my past works with Selenium. I’m the new DevRel and Head of Community at Applitools—Andy’s moved on to a tremendous bucket-list job opportunity elsewhere, and we wish him all the best! I’ve been working closely with him behind the scenes to learn the ropes to help make this a smooth transition and to ensure that all of the great work he’s done and the community he’s grown will continue to flourish. And to loosely paraphrase Shakespeare – A DevRel (or a Dave) by any other name would be just as sweet.

Now, about that story…

I used to be known for a thing – “Selenium Dave” as they would say. I worked hard to earn that rep. I had one aim, to be helpful. I was trying to solve a problem that vexed me early on in my career in test automation (circa 2009) when open-source test automation and grid providers were on a meteoric rise. The lack of clear and concise guidance on how to get started and grow into a mature test automation practice was profound. But the fundamentals weren’t that challenging to master (once you knew what they were), and the number of people gnashing their teeth as they white-knuckled their way through it was eye-popping.

So, back in 2011, after working in the trenches at a company as an SDET (back before that job title was a thing), I left to start out on my own with a mission to help make test automation simpler. It started simply enough with consulting. But then the dominos began to fall when I started organizing a local test automation meetup.

While running the meetup I realized I kept getting asked the same questions and offering the same answers, so I started jotting them down and putting them into blog posts which later became a weekly tip newsletter (Elemental Selenium, which eventually grew to a readership of 30,000 testers). Organically, that grew into enough content (and confidence) to write a book, The Selenium Guidebook.

I then stepped out of meetup organization and into organizing the Selenium conference, where I became the conference chair from 2014 to 2017. My work on the conference opened the door for me to become part of the Selenium core team. From there it was a hop-skip-and-a-jump to working full-time as a contributor on Selenium IDE at Applitools.

Underpinning all of this, I was doing public speaking at meetups and conferences around the world (starting with my first conference talk back in 2010). I felt like I had summited the mountain—I was in the best possible position to be the most helpful. And I truly felt like I was making a difference in the industry.

But then I took a hard right turn and stopped doing it all. I felt like I had accomplished what I’d set out to do – I had helped make testing simpler (at least for people using Selenium). So I stepped down from the Selenium project, I stopped organizing the Selenium conference, I stopped doing public speaking, I sold my content business (e.g., the newsletter & book) to a third party, and I even changed my last name (from Haeffner to Piacente – although for reasons unrelated to my work). By all marks, I had closed that chapter of my life and was happily focusing on being a full-time Software Developer in the R&D team at Applitools.

While I was doing that, the test automation space continued to grow and evolve as I watched from the sidelines. Seemingly every enterprise was now shifting left (not just the more progressive ones), alternative open-source test automation frameworks to Selenium continued to gain ground in adoption, some new-and-noteworthy entrants started popping up, and the myriad of companies selling their wares in test automation seemed to grow exponentially. And then, Generative AI waltzed into the public domain like the Kool-Aid man busting through a wall. “Oh yeah!”

I started to realize that the initial problem I had strived to make a dent in—making testing simpler—was a moving target. Some things are far simpler now than when I started out, but some are more complex. There are new problems constantly emerging, and the ground underneath our feet is shifting.

So perhaps my work is not done. Perhaps there is more that I can do to help make test automation simpler. To return to public speaking and content creation. To return to being helpful. But this time, with the full weight of a company behind me, instead of just as a one-man show.

I’m thrilled to be back, and I’m excited for what’s to come!

Power Up Your Test Automation with Playwright
https://applitools.com/blog/power-up-your-test-automation-with-playwright/ | Thu, 31 Aug 2023

Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.
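
As a rough sketch of those locators in action (the page, selector strings, and test data here are hypothetical), a test built on Playwright’s user-facing locators might read:

const { test, expect } = require('@playwright/test');

test('sign-up form is reachable', async ({ page }) => {
  await page.goto('https://example.com');
  // User-facing locators survive markup refactors better than raw CSS/XPath.
  await page.getByText('Sign up').click();
  await page.getByPlaceholder('Email address').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret!');
  await expect(page.getByRole('button', { name: 'Create account' })).toBeEnabled();
});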

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.
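
The generator ships with the Playwright CLI; pointing it at a site (the URL below is just an example) opens a browser and records your actions as test code:

npx playwright codegen https://example.com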

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.
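
Both are a command away with the standard Playwright CLI; for instance (the trace file name depends on your run configuration):

npx playwright test --ui
npx playwright show-trace trace.zip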

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.
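
As a minimal sketch of that pairing, assuming the classic Eyes API from the @applitools/eyes-playwright SDK (your SDK version may expect a different setup, so treat the names here as illustrative):

const { test } = require('@playwright/test');
const { Eyes, Target } = require('@applitools/eyes-playwright');

test('home page looks right', async ({ page }) => {
  const eyes = new Eyes();
  await eyes.open(page, 'My App', 'home page looks right'); // app/test names are placeholders
  await page.goto('https://example.com');
  await eyes.check('Home', Target.window().fully()); // full-page visual checkpoint
  await eyes.close();
});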

Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.
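
For example (the cookie name and value are made up for illustration), a test can seed the browser context before navigating:

const { test } = require('@playwright/test');

test('shows the B variant', async ({ context, page }) => {
  // Pre-set the cookie the A/B framework reads, then load the page.
  await context.addCookies([
    { name: 'ab_variant', value: 'b', domain: 'example.com', path: '/' },
  ]);
  await page.goto('https://example.com');
  // ...assert on the variant-specific UI here.
});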

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

AI-Powered Test Automation: How GitHub Copilot and Applitools Can Help
https://applitools.com/blog/ai-powered-test-automation-how-github-copilot-and-applitools-can-help/ | Tue, 22 Aug 2023

Can AI Autogenerate and Run Automated Tests?

Test automation is crucial for any software engineering team to ensure high-quality releases and a smooth software development lifecycle. However, test automation efforts can often be tedious, time-consuming, and require specialized skills. New AI tools are emerging that can help accelerate test automation, handle flaky tests, increase test coverage, and improve productivity.

In a recent webinar, Rizel Scarlett and Anand Bagmar discussed how to leverage AI-powered tools like GitHub Copilot and Applitools to boost your test automation strategy.

GitHub Copilot can generate automated tests.

By providing code suggestions based on comments and prompts, Copilot can help quickly write test cases and accelerate test automation development. For example, a comment like “validate phone number” can generate a full regular expression in seconds. Copilot also excels at writing unit tests, which many teams struggle to incorporate efficiently.
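
To give a feel for that workflow, a developer writes a short prompt comment and Copilot suggests the implementation and a unit test. The snippet below is illustrative of a typical suggestion, not a verbatim Copilot output:

// validate phone number
function isValidPhoneNumber(value) {
  // Matches US-style numbers like 123-456-7890 or (123) 456-7890.
  return /^\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}$/.test(value);
}

// Jest-style unit test for the generated helper.
test('accepts a US-formatted number', () => {
  expect(isValidPhoneNumber('(123) 456-7890')).toBe(true);
});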

Applitools Execution Cloud provides self-healing test capabilities.

The Execution Cloud allows you to run tests in the cloud or on your local machine. With self-healing functionality, tests can continue running successfully even when there are changes to web elements or locators. This helps reduce flaky tests and maintenance time. Although skeptical about self-healing at first, the speakers found that Applitools handled updates properly without clicking incorrect elements.

Together, tools like Copilot and Applitools can transform your test automation.

Copilot generates the initial test cases and Applitools provides a self-healing cloud environment to run them. This combination leads to improved productivity, reduced flaky tests, and increased coverage.

Applitools Eyes and Execution Cloud offer innovative AI solutions for automated visual testing. By leveraging new technologies like these, teams can achieve test automation at scale and ship high-quality software with confidence. To see these AI tools in action and learn how they can benefit your team, watch the full webinar recording.

Welcome Preflight To The Applitools Family
https://applitools.com/blog/welcome-preflight-to-the-applitools-family/ | Thu, 29 Jun 2023

We’re thrilled to announce the acquisition of Preflight by Applitools!

We are excited to share some fantastic news with our valued customers and the broader testing community. Applitools has acquired Preflight, a pioneering no-code platform that streamlines the creation, execution, and management of complex end-to-end tests. This acquisition marks a significant step in our journey to provide you with breakthrough technology that empowers your teams to increase test coverage, reduce test execution time, and deliver superior applications that your customers will love.

Introducing Applitools Preflight

Preflight is a robust no-code testing tool that empowers teams of all skill levels to automate complex testing scenarios. It runs these tests at an impressive scale across various browsers and screen sizes. Preflight’s user-friendly web recorder captures every element accurately and includes a data generator to simulate even the most complex test cases. This is a game-changer for manual testers, QA engineers, and product teams as it empowers them to automate test scenarios regardless of their skillset, effectively multiplying their QA abilities instantly.

Preflight ensures businesses achieve the test coverage necessary to consistently delight customers with each new experience, all without writing a single line of code.

The Benefits of Preflight

Simplified Test Creation: With Preflight, anyone on the team can create and run tests, democratizing the testing process. This inclusivity leads to more thorough testing and faster feedback cycles.

Expanded Test Coverage: Preflight enables teams to create comprehensive test suites that cover more functionality in less time. It can create UI tests and API tests, verify emails during sign-up, generate synthetic data, and more. This means teams can test more scenarios and edge cases that may have been overlooked with manual testing or traditional automated testing.

Enhanced Maintainability and Reusability: Preflight allows customers to reuse sections of test suites, workflows, login profiles, data, and more across different tests, reducing redundancy. It also simplifies test maintenance with a powerful test editor and live test replay that makes editing tests fast and intuitive, reducing one of the biggest gripes of record-and-replay tools.

The Future of Applitools and Preflight

While Preflight will continue to be available as a standalone product, we are actively integrating it into the Applitools platform to bring Visual AI to the masses! To get an exclusive first look at Preflight today, we invite you to sign up for a demo with one of our engineers.

The Ultimate Guide To End-to-End Testing With Cypress
https://applitools.com/blog/the-ultimate-guide-to-end-to-end-testing-with-cypress/ | Mon, 19 Jun 2023

A guide to the anatomy of the Cypress framework, how it compares to other frameworks, and why it’s so popular!

Today’s software applications are getting more complicated, so every testing team needs to focus on expanding test coverage. To achieve this goal, it is important to use a combination of testing types, such as unit testing, integration testing, system testing, and end-to-end testing, depending on the software application’s complexity and requirements.

End-to-End (E2E) testing is designed to ensure that all components of a software application are working together correctly and that the system as a whole meets the desired functionality, performance, and reliability requirements.

Cypress is a popular open-source end-to-end testing framework for web applications. It is designed to make the testing process easier and more efficient for developers. One of the unique features of Cypress is that it runs the tests within the browser, which means that it can provide better control and visibility over the application under test.

In this blog on end-to-end testing, we will dive deep into performing Cypress end-to-end testing on a local Cypress grid and explain how to start automating visual tests with Applitools Eyes and the Ultrafast Grid using Cypress in JavaScript.

What is End to End Testing?

End-to-end (E2E) testing is a software testing strategy that verifies an application’s complete flow from beginning to end. It is a type of functional testing that tests the application’s behavior as a complete system, rather than testing individual components in isolation.

E2E testing simulates a real user scenario and covers all aspects of the application, including user interfaces, APIs, databases, and other integrations. It typically involves testing multiple components of an application to ensure that they work together as expected and fulfill the requirements of the business or end-users.

E2E testing is typically performed after other types of testing, such as unit testing and integration testing, have been completed. It is used to validate that the entire system works together seamlessly and to identify any issues that may have been missed in earlier stages of testing.

Why is end-to-end testing necessary?

End-to-end testing (E2E testing) is a type of software testing that tests the entire system or application from start to finish, simulating real-world user scenarios.

Unit testing alone is not enough to ensure the quality and reliability of software. While unit testing is an important part of the testing process, it only verifies the behavior of individual components or modules of the software in isolation. It does not guarantee that the software will work correctly when integrated with other components or modules.

This is where integration testing enters the picture. Integration testing focuses on testing the interaction between two or more components of a system to ensure that they work together correctly. However, even if all the individual components pass integration testing, there may still be issues with the overall system when all the components are put together. This is where end-to-end testing comes in – it tests the entire system from start to finish.

Cypress is a popular automation testing framework that is designed specifically for end-to-end testing. It runs tests directly in the browser, allowing it to provide an experience that is similar to how users interact with the application. This makes it easier to identify any issues that users might face, as the testing environment is as close to the real-world experience as possible.

To understand end-to-end testing, let’s take a closer look at Mike Cohn’s test automation pyramid. We routinely perform each level of testing listed in this pyramid when running automated Cypress tests.

Testing Pyramid Layers

The automation pyramid is a popular framework introduced by Mike Cohn that helps teams to plan and prioritize their testing efforts. It includes three levels of testing, which are:

  1. Unit Tests: At the base of the pyramid are the unit tests, which test individual code components such as functions, methods, and classes. Unit tests are typically written by developers and are executed frequently during the development cycle. They are essential in ensuring that individual components of the application work as expected and can catch issues early in the development process.
  2. Integration Tests: The middle layer of the pyramid consists of integration tests, which test how different components of the system work together. Integration tests ensure that the various parts of the application can communicate and interact with each other seamlessly. These tests are typically automated and are executed after the unit tests have passed.
  3. End-to-End Tests: The top layer of the pyramid is end-to-end testing, which tests the entire application workflow from start to finish. These tests simulate real user scenarios and help ensure that the application functions as expected in a production environment. End-to-end tests are typically automated and are executed less frequently than the lower level tests.

Benefits of End-to-End Testing

There are several benefits of end-to-end testing, including:

  1. Increased Confidence: E2E testing provides a higher level of confidence in the software application by testing all components together. This testing approach ensures that all the components are integrated correctly and are working as expected.
  2. Improved Quality: Testing the application from end to end helps to identify and fix bugs earlier in the development process. This enhances the overall quality of the software.
  3. Enhanced User Experience: E2E testing ensures that the application is working as expected for the end user. This helps to provide a better user experience and can lead to increased customer satisfaction.
  4. Time and Cost Savings: E2E testing helps to identify issues early in the development cycle, which can save time and money by reducing the need for costly rework later in the process.
  5. Better Collaboration: E2E testing promotes better collaboration between different teams working on the same application. This testing approach helps to identify issues that may be caused by a lack of communication between teams.
  6. Increased Productivity: By automating the testing process, E2E testing can help to increase productivity by reducing the time and effort required to manually test the application.
  7. Faster Time-to-Market: By catching defects earlier in the development process, end-to-end testing can help to reduce delays and accelerate the time-to-market of the application.

Frameworks for End to End testing

There are several popular frameworks for end-to-end testing, including:

Cypress

Cypress is a JavaScript-based end-to-end testing framework that provides a simple and intuitive API for testing web applications. Cypress supports modern web development technologies like React, Angular, Vue.js, and more. It provides a built-in test runner, and it runs tests in the browser, which makes it fast and reliable.

Cypress runs tests inside the browser; it also provides detailed information about what’s happening at every step of the test, including network requests, console output, and DOM changes. This makes it easier to identify and troubleshoot issues and helps ensure that the application is working as intended.

Cypress Trends on GitHub

The following information is taken from the official Cypress GitHub repository:

  • Stars: 43.3k
  • Forks: 2.8k
  • Used By: 797k
  • Releases: 303
  • Contributors: 427

WebdriverIO

WebdriverIO is a popular open-source testing framework for Node.js that allows developers to automate web applications in a simple and efficient way. It uses the WebDriver API to communicate with browsers and supports a variety of testing frameworks, including Mocha, Jasmine, and Cucumber.

WebdriverIO Trends on GitHub

The following information is taken from the official WebdriverIO GitHub repository:

  • Stars: 8.1k
  • Forks: 2.3k
  • Used By: 50.5k
  • Releases: 305
  • Contributors: 491

Nightwatch.js

Nightwatch.js is an open-source Node.js-based end-to-end testing framework used to automate browser testing. It provides a simple and easy-to-use syntax for writing automated tests in JavaScript and allows you to run tests in real web browsers like Chrome, Firefox, and Safari.

Nightwatch.js uses the WebDriver protocol to communicate with the browser and control its behavior. It also includes a powerful built-in assertion library that makes it easy to write test assertions and helps you quickly identify issues with your web application.
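As a quick illustration of that syntax and the built-in assertions, here is a minimal sketch of a Nightwatch.js test; the URL and expected title are hypothetical placeholders:

module.exports = {
  'example page has the expected title': function (browser) {
    browser
      .url('https://example.com')        // placeholder URL
      .waitForElementVisible('body')     // wait for the page to render
      .assert.titleContains('Example')   // built-in assertion library
      .end()                             // close the browser session
  }
}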

Nightwatch.js Trends on GitHub

The following information is taken from the official Nightwatch.js GitHub repository:

  • Stars: 11.4k
  • Forks: 1.1k
  • Used By: 142k
  • Releases: 219
  • Contributors: 112

Protractor

Protractor is an open-source end-to-end testing framework for Angular and AngularJS applications. It is built on top of WebDriverJS and uses Jasmine syntax for writing test scripts. Protractor is designed to simulate user interactions with the application and to verify that the application behaves as expected. Note that the Angular team has deprecated Protractor, so it is not recommended for new projects.

Protractor Trends on GitHub

The following information is taken from the official Protractor GitHub repository:

  • Stars: 8.8k
  • Forks: 2.4k
  • Used By: 1.9m
  • Contributors: 250

TestCafe

TestCafe is an open-source end-to-end testing framework that allows you to automate web testing without using browser plugins. TestCafe is built on top of Node.js and provides a simple and powerful API for testing web applications.

TestCafe Trends on GitHub

The following information is taken from the official TestCafe GitHub repository:

  • Stars: 9.6k
  • Forks: 677
  • Used By: 12.3k
  • Releases: 390
  • Contributors: 117

Benefits of End-to-End Testing Using Cypress

Here are some of the key features of Cypress end-to-end testing:

  1. Easy Setup: Cypress has a simple setup process that doesn’t require any additional drivers or libraries. You can get started with Cypress by installing a single package.
  2. Automatic Waiting: Cypress automatically waits for elements to appear and become interactable before executing commands. This ensures that the tests are not affected by the timing of the application's response.
  3. Real-time Reloads: Cypress provides real-time reloads, which means that as you make changes to your code or tests, the application will automatically reload, and the tests will be re-run.
  4. Interactive Debugging: Cypress provides an interactive test runner, which allows you to debug your tests by stepping through them, setting breakpoints, and viewing the application’s state at any point in time.
  5. Time Travel: Cypress allows you to go back and forth in time to see what happened during the execution of a test. This feature is useful for debugging and understanding the behavior of your application.
  6. Cross-browser Testing: Cypress allows you to run your tests on multiple browsers and viewports simultaneously. This helps you ensure that your application works correctly across different environments.
  7. Network Traffic Control: Cypress allows you to control the network traffic of your application. You can stub, spy, and mock network requests to simulate different scenarios (see the sketch after this list).
  8. Automatic screenshots and videos: Cypress automatically takes screenshots and records videos of your tests, which makes it easy to see what went wrong when a test fails.
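Here is a minimal sketch of network traffic control with cy.intercept(); the route, alias, and fixture file are hypothetical placeholders:

// Stub a GET request with a canned response from a fixture file
cy.intercept('GET', '/api/reminders', { fixture: 'reminders.json' }).as('getReminders')

// Spy on a POST request without altering it
cy.intercept('POST', '/api/reminders').as('createReminder')

// Later in the test, wait for the stubbed call and assert on it
cy.wait('@getReminders').its('response.statusCode').should('eq', 200)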

Set up Cypress For End to End Testing

To create a new project for Cypress automated testing, follow the steps listed below.

Step 1: Generate package.json.

  • Create a project folder; let's name it cypress_applitools
  • Use the npm init command to create a package.json file

Step 2: Install Cypress.

Install Cypress by running one of the following commands in the newly created folder:

npm install cypress --save-dev

OR

yarn add cypress --dev

The above command installs Cypress locally as a dev dependency for your project.

As shown below, Cypress version 12.11.0 is reflected after installation; it was the newest Cypress release at the time this blog was written.
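To launch the Cypress Test Runner, which also scaffolds the default project structure described below on first open, run:

npx cypress open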

Below is a diagram of Cypress’s default folder layout. The “e2e” folder is where test cases can be created.

About the Project Structure of Cypress

Cypress builds a default folder hierarchy when it opens for the first time, as can be seen in the screenshots. Each of the files and folders that Cypress creates is described in detail below.

  • e2e: All test cases are stored under this folder. This folder contains the actual test files, written in JavaScript, that define the tests to be run.
  • fixtures: This folder contains any data files that are needed for the tests, such as JSON or CSV files.
  • support: There are two files inside the support folder: commands.js and e2e.js.
    • commands.js: This is the file where your frequently used functions and custom commands are added. It might hold functions like a login function that you reuse across tests. You can also adjust the example commands Cypress generated for you right here.
    • e2e.js: This file is executed before each and every spec file. It is an excellent location for global configuration and behavior that modifies Cypress, such as code you would otherwise repeat in before or beforeEach hooks. By default it just imports commands.js, but you can import or require more files to keep things organized.
  • node_modules: This directory holds all the installed Node packages, and all test files have access to them. When you install packages using npm, they are downloaded into the node_modules directory, which is located in the root directory of your project.
  • cypress.config.js: This is the configuration file Cypress uses to override the default settings for a project. In Cypress 10 and later it replaces the older cypress.json file.

Some examples of configuration options that can be set in cypress.config.js include (a minimal sketch follows this list):

  • baseUrl: The base URL for the application under test; cy.visit() paths are resolved against it.
  • specPattern: A glob pattern that tells Cypress which test files to run (it replaces the older testFiles option).
  • video: Whether to record a video of the test run.
  • screenshotOnRunFailure: Whether to capture a screenshot when a test fails during cypress run.
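Here is a minimal sketch of a cypress.config.js using these options; the base URL and spec pattern are placeholders you would adapt to your own project:

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // cy.visit() paths resolve against this URL (placeholder)
    baseUrl: 'http://localhost:3000',
    // Glob pattern for the spec files to run (placeholder)
    specPattern: 'cypress/e2e/**/*.cy.js',
  },
  // Record a video of each headless run
  video: true,
  // Capture a screenshot automatically when a test fails
  screenshotOnRunFailure: true,
})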

Basic constructs of Cypress

Cypress uses Mocha's syntax for developing test cases. Key constructs that are frequently used in Cypress test development are listed below; a short example spec follows the list.

  • describe(): This method is used in Cypress (via Mocha's syntax) to group together related test cases. It takes two arguments: a string that describes the group of test cases (e.g., "Login Page Tests") and a callback function that contains the individual test cases (defined with the it() method).
  • it(): This method is used to define an individual test case. It requires two arguments: a string that specifies the test scenario and a callback function that contains the test code itself.
  • before(): This method is used to run setup code once before any of the test cases. It takes one argument: a callback function that contains the setup code to be executed before any test case runs.
  • after(): This method is used to run cleanup code once after all the test cases have executed. It takes one argument: a callback function that contains the cleanup code.
  • beforeEach(): This method is used to run setup code before each test case. It takes one argument: a callback function that contains the code to be executed before every test case.
  • afterEach(): This method is used to run cleanup code after each test case. It takes one argument: a callback function that contains the cleanup code to be executed after every test case.
  • .only(): It is used to run a specified suite or test exclusively, ignoring all other tests and suites. This can be useful when you’re debugging a specific test case or working on a specific suite of tests, and you want to focus on that specific test case or suite without running any others.
  • .skip(): It is used to skip a specified suite or test, effectively ignoring it during test execution. This can be useful when you’re working on a test suite or test case that isn’t ready to be run yet, or when you want to temporarily disable a test without deleting it.
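To tie these constructs together, here is a minimal sketch of a spec file; the route, selectors, and credentials are hypothetical placeholders:

describe('Login Page Tests', () => {
  beforeEach(() => {
    // Runs before every test in this block
    cy.visit('/login')
  })

  it('logs in with valid credentials', () => {
    cy.get('input[name="username"]').type('testuser')
    cy.get('input[name="password"]').type('s3cret!')
    cy.contains('button', 'Login').click()
    cy.url().should('include', '/reminders')
  })

  // Temporarily disabled with .skip() until this page is ready
  it.skip('shows an error for invalid credentials', () => {
    // ...
  })
})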

The post The Ultimate Guide To End-to-End Testing With Cypress appeared first on Automated Visual Testing | Applitools.

]]>
Add self-healing to your Selenium tests with Applitools Execution Cloud https://applitools.com/blog/add-self-healing-to-your-selenium-tests-with-applitools-execution-cloud/ Tue, 06 Jun 2023 07:05:55 +0000 https://applitools.com/?p=50888 A tutorial to get you started with the Applitools Execution Cloud!

The post Add self-healing to your Selenium tests with Applitools Execution Cloud appeared first on Automated Visual Testing | Applitools.

]]>

Applitools just released an exciting new product: the Applitools Execution Cloud.

The Applitools Execution Cloud is extraordinary. Like several other testing platforms (such as Selenium Grid), it runs web browser sessions in the cloud – rather than on your machine – to save you the hassle of scaling and maintaining your own resources. However, unlike other platforms, Execution Cloud will automatically wait for elements to be ready for interactions and then fix locators when they need to be updated, which solves two of the biggest struggles when running end-to-end tests. It’s the first test cloud that adds AI power to your tests with self-healing capabilities. It also works with open source tools like Selenium rather than proprietary “low-code-no-code” tools.

Execution Cloud can run any WebDriver-based test today, even ones that don’t use Applitools Eyes. Execution Cloud also works seamlessly with Applitools Ultrafast Grid, so tests can still cover multiple browser types, devices, and viewports. The combination of Execution Cloud with Ultrafast Grid enables functional and visual testing to work together beautifully!

I wanted to be one of the first engineers to give this new platform a try. The initial release supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. Future releases will support others like Cypress and Playwright. In this article, I’m going to walk through my first experiences with Execution Cloud using Selenium WebDriver in my favorite language – Python. Let’s go!

Starting with plain-old functional tests

Recently, I’ve been working on a little full-stack Python web app named Bulldoggy, the reminders app. Bulldoggy has a login page and a reminders page. It uses HTMX to handle dynamic interactions like adding, editing, and deleting reminder lists and items. (If you want to learn how I built this app, watch my PyTexas 2023 keynote.) Here are quick screenshots of the login and reminders pages:

The Bulldoggy login page.

The Bulldoggy reminders page.

Writing a test with Selenium

My testing setup for Bulldoggy is very low-tech: I run the app locally in one terminal, and I launch my tests against it from a second terminal. I wrote a fairly basic login test with Selenium WebDriver and pytest. Here’s the test code:

import pytest

from selenium.webdriver import Chrome, ChromeOptions
from selenium.webdriver.common.by import By


@pytest.fixture(scope='function')
def local_webdriver():
  options = ChromeOptions()
  driver = Chrome(options=options)
  yield driver
  driver.quit()


def test_login_locally(local_webdriver: Chrome):

  # Load the login page
  local_webdriver.get("http://127.0.0.1:8000/login")

  # Perform login
  local_webdriver.find_element(By.NAME, "username").send_keys('pythonista')
  local_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")
  local_webdriver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  assert local_webdriver.find_element(By.ID, 'bulldoggy-logo')
  assert local_webdriver.find_element(By.ID, 'bulldoggy-title').text == 'Bulldoggy'
  assert local_webdriver.find_element(By.XPATH, "//button[.='Logout']")
  assert local_webdriver.title == 'Reminders | Bulldoggy reminders app'

If you’re familiar with Selenium WebDriver, then you’ll probably recognize the calls in this code, even if you haven’t used Python before. The local_webdriver function is a pytest fixture – it handles setup and cleanup for a local ChromeDriver instance. The test_login_locally function is a test case function that calls the fixture and receives the ChromeDriver instance via dependency injection. The test then loads the Bulldoggy web page, performs login, and checks that the reminders page loads correctly.

When I ran this test locally, it worked just fine: the browser window opened, the automation danced across the pages, and the test reported a passing result. That was all expected. It was a happy path, after all.

Hitting broken locators

Oftentimes, when making changes to a web app, we (or our developers) will change the structure of a page’s HTML or CSS without actually changing what the user sees. Unfortunately, this frequently causes test automation to break because locators fall out of sync. For example, the input elements on the Bulldoggy login page had the following HTML markup:

<input type="text" placeholder="Enter username" name="username" required />
<input type="password" placeholder="Enter password" name="password" required />

My test used the following locators to interact with them:

local_webdriver.find_element(By.NAME, "username").send_keys("pythonista")
local_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")

My locators relied on the input elements' name attributes. If I changed those names, then the locators would break and the test would crash. For example, I could shorten them like this:

<input type="text" placeholder="Enter username" name="user" required />
<input type="password" placeholder="Enter password" name="pswd" required />

What seems like an innocuous change on the front-end can be devastating for automated tests. It’s impossible to know if an HTML change will break tests without deeply investigating the test code or cautiously running the whole test suite to shake out discrepancies.

Sure enough, when I ran my test against this updated login page, it failed spectacularly with the following error message:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}

It was no surprise. The CSS selectors no longer found the desired elements.

A developer change like the one I showed here with the Bulldoggy app is only one source of fragility for locators. Many Software-as-a-Service (SaaS) applications like Salesforce and even some front-end development frameworks generate element IDs dynamically, which makes it hard to build stable locators. A/B testing can also introduce page structure variations that break locators. Web apps in development are always changing for one reason or another, making locators perpetually susceptible to failure.

Automatically healing broken locators

One of the most appealing features of Execution Cloud is that it can automatically heal broken locators. Instead of running your WebDriver session on your local machine, you run it remotely on Execution Cloud. In that sense, it’s like Selenium Grid or other popular cross-browser testing platforms. However, unlike those other platforms, it learns the interactions your tests take, and it can dynamically substitute broken locators for working ones whenever they happen. That makes your tests robust against flakiness for any reason: changes in page structure, poorly-written selectors, or dynamically-generated IDs.

Furthermore, Execution Cloud can run “non-Eyes” tests. A non-Eyes test is a traditional, plain-old functional test with no visual assertions or “visual testing.” Our basic login test is a non-Eyes test because it does not capture any checkpoints with Visual AI – it relies entirely on Selenium-based interactions and verifications.

I wanted to put these self-healing capabilities to the test with our non-Eyes test.

Setting up the project

To start, I needed my Applitools account handy (which you can register for free), and I needed to set my API key as the APPLITOOLS_API_KEY environment variable. I also installed the latest version of the Applitools Eyes SDK for Selenium in Python (eyes-selenium).

In the test module, I imported the Applitools Eyes SDK:

from applitools.selenium import *

I wrote a fixture to create a batch of tests:

@pytest.fixture(scope='session')
def batch_info():
  return BatchInfo("Bulldoggy: The Reminders App")

I also wrote another fixture to create a remote WebDriver instance that would run in Execution Cloud:

@pytest.fixture(scope='function')
def non_eyes_driver(
  batch_info: BatchInfo,
  request: pytest.FixtureRequest):

  options = ChromeOptions()
  options.set_capability('applitools:tunnel', 'true')

  driver = Remote(
    command_executor=Eyes.get_execution_cloud_url(),
    options=options)

  driver.execute_script(
    "applitools:startTest",
    {
      "testName": request.node.name,
      "appName": "Bulldoggy: The Reminders App",
      "batch": {"id": batch_info.id}
    }
  )
  
  yield driver

  status = 'Failed' if request.node.test_result.failed else 'Passed'
  driver.execute_script("applitools:endTest", {"status": status})
  driver.quit()

Execution Cloud setup requires a few extra things. Let’s walk through them together:

  • Since I’m running the Bulldoggy app on my local machine, I need to set up a tunnel between the remote session and my machine. There are two ways to do this. One way is to set up ChromeOptions with options.set_capability('applitools:tunnel', 'true'), which I put in the code above. If you don’t want to hardcode the Applitools tunnel setting, the second way is to set the APPLITOOLS_TUNNEL environment variable to True. That way, you could toggle between local web apps and publicly-accessible ones. Tunnel configuration is documented at the bottom of the Execution Cloud setup and installation page.
  • The WebDriver session will be a remote one in Execution Cloud. Instead of creating a local ChromeDriver instance, the test creates a remote instance using the Execution Cloud URL by calling driver = Remote(command_executor=Eyes.get_execution_cloud_url(), options=options).
  • Since this is a non-Eyes test, we need to explicitly indicate when a test starts and stops. The driver.execute_script call sends a "applitools:startTest" event with inputs for the test name, app name, and batch ID.
  • At the end of the test, we need to likewise explicitly indicate the ending with the test status. That’s the second driver.execute_script call. Then, we can quit the browser.

In order to get the test result from pytest using request.node.test_result, I had to add the following hook to my conftest.py file:

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
  outcome = yield
  setattr(item, 'test_result', outcome.get_result())

This is a pretty standard pattern for pytest.

Updating the test case

The only change I had to make to the test case function was the fixture it called. The body of the function remained the same:

def test_login_with_execution_cloud(non_eyes_driver: Remote):

  # Load the login page
  non_eyes_driver.get("http://127.0.0.1:8000/login")

  # Perform login
  non_eyes_driver.find_element(By.NAME, "username").send_keys('pythonista')
  non_eyes_driver.find_element(By.NAME, "password").send_keys("I<3testing")
  non_eyes_driver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  assert non_eyes_driver.find_element(By.ID, 'bulldoggy-logo')
  assert non_eyes_driver.find_element(By.ID, 'bulldoggy-title').text == 'Bulldoggy'
  assert non_eyes_driver.find_element(By.XPATH, "//button[.='Logout']")
  assert non_eyes_driver.title == 'Reminders | Bulldoggy reminders app'

Running the test in Execution Cloud

I reverted the login page’s markup to its original state, and then I ran the test using the standard command for running pytest: python -m pytest tests. (I also had to set my APPLITOOLS_API_KEY environment variable, as previously mentioned.) Tests ran like normal, except that the browser session did not run on my local machine; it ran in the Execution Cloud.

To view the results, I opened the Eyes Test Manager. Applitools captured a few extra goodies as part of the run. When I scrolled all the way to the right and clicked the three-dots icon for one of the tests, there was a new option called “Execution Cloud details”. Under that option, there were three more options:

  1. Download video
  2. Download WebDriver commands
  3. Download console log

Execution Cloud details for a non-Eyes test.

The option that stuck out to me the most was the video. Video recordings are invaluable for functional test analysis because they show how a test runs in real time. Screenshots along the way are great, but they aren’t always helpful when an interaction goes wrong or just takes too long to complete. When running a test locally, you can watch the automation dance in front of your eyes, but you can’t do that when running remotely or in Continuous Integration (CI).

Here’s the video recording for one of the tests:

The WebDriver log and the console log can be rather verbose, but they can be helpful traces to investigate when something fails in a test. For example, here’s a snippet from the WebDriver log showing one of the commands:

{
  "id": 1,
  "request": {
    "path": "execute/sync",
    "params": {
      "wdSessionId": "9c65e0c2-6742-4bc1-a2af-4672166faf21",
      "*": "execute/sync"
    },
    "method": "POST",
    "body": {
      "script": "return (function(arg){\nvar s=function(){\"use strict\";var t=function(t){var n=(void 0===t?[]:t)[0],e=\"\",r=n.ownerDocument;if(!r)return e;for(var o=n;o!==r;){var a=Array.prototype.filter.call(o.parentNode.childNodes,(function(t){return t.tagName===o.tagName})).indexOf(o);e=\"/\"+o.tagName+\"[\"+(a+1)+\"]\"+e,o=o.parentNode}return e};return function(){var n,e,r;try{n=window.top.document===window.document||\"root-context\"===window.document[\"applitools-marker\"]}catch(t){n=!1}try{e=!window.parent.document===window.document}catch(t){e=!0}if(!e)try{r=t([window.frameElement])}catch(t){r=null}return[document.documentElement,r,n,e]}}();\nreturn s(arg)\n}).apply(null, arguments)",
      "args": [
        null
      ]
    }
  },
  "time": "2023-05-01T03:52:03.917Z",
  "offsetFromCreateSession": 287,
  "duration": 47,
  "response": {
    "statusCode": 200,
    "body": "{\"value\":[{\"element-6066-11e4-a52e-4f735466cecf\":\"ad7cff25-c2d8-4558-9034-b1727ed289d6\"},null,true,false]}"
  }
}

It’s pretty cool to see the Eyes Test Manager providing all these helpful testing artifacts.

Running the test with self-healing locators

After the first test run with Execution Cloud, I changed the names for those input fields:

<input type="text" placeholder="Enter username" name="user" required />
<input type="password" placeholder="Enter password" name="pswd" required />

The login page effectively looked the same, but its markup had changed. I also had to update these form values in the get_login_form_creds function in the app.utils.auth module.

I reran the test (python -m pytest tests), and sure enough, it passed! The Eyes Test Manager showed a little wand icon next to its name:

The wand icon in the Eyes Test Manager showing locators that were automatically healed.

The wand icon indicates that locators in the test were broken but Execution Cloud was able to heal them. I clicked the wand icon and saw this:

Automatically healed locators.

Execution Cloud changed the locators from using CSS selectors for the name attributes to using XPaths for the placeholder text. That’s awesome! With Applitools, the test overcame page changes so it could run to completion. Applitools also provided the “healed” locators it used so I could update my test code as appropriate.

Running tests with Execution Cloud and Ultrafast Grid together

Visual assertions backed by Visual AI can greatly improve the coverage of traditional functional tests, like our basic login scenario for the Bulldoggy app. If we scrutinize the steps we automated, we can see that (a) we didn’t check anything on the login page itself, and (b) we only checked the basic appearance of three elements on the reminders page plus the title. That’s honestly very shallow coverage. The test doesn’t check important facets like layout, placement, or color. We could add assertions to check more elements, but that would add more brittle locators for us to maintain as well as take more time to develop. Visual assertions could cover everything on the page implicitly with a one-line call.

We can use the Applitools Eyes SDK for Selenium in Python to add visual assertions to our Bulldoggy test. That would transform it from a “non-Eyes” test to an “Eyes” test, meaning it would use Applitools Eyes to capture visual snapshots and find differences with Visual AI in addition to making standard functional interactions. Furthermore, we can perform cross-browser testing with Eyes tests using Applitools Ultrafast Grid, which will re-render the snapshots it captures during testing on any browser configurations we declare.

Thankfully, Execution Cloud and Ultrafast Grid can run Eyes tests together seamlessly. I updated my login test to make it happen.

Setting up Applitools Eyes

Setting up Applitools Eyes for our test will be no different than the setup for any other visual test you may have written with Applitools. I already created a fixture for the batch info, so I needed to add fixtures for the Ultrafast Grid runner and the browsers to test on the Ultrafast Grid:

@pytest.fixture(scope='session')
def runner():
  run = VisualGridRunner(RunnerOptions().test_concurrency(5))
  yield run
  print(run.get_all_test_results())


@pytest.fixture(scope='session')
def configuration(batch_info: BatchInfo):
  config = Configuration()
  config.set_batch(batch_info)

  config.add_browser(800, 600, BrowserType.CHROME)
  config.add_browser(1600, 1200, BrowserType.FIREFOX)
  config.add_browser(1024, 768, BrowserType.SAFARI)
  config.add_device_emulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT)
  config.add_device_emulation(DeviceName.Nexus_10, ScreenOrientation.LANDSCAPE)

  return config

In this configuration, I targeted three desktop browsers and two mobile browsers.

I also wrote a simpler fixture for creating the remote WebDriver session:

@pytest.fixture(scope='function')
def remote_webdriver():
  options = ChromeOptions()
  options.set_capability('applitools:tunnel', 'true')

  driver = Remote(
    command_executor=Eyes.get_execution_cloud_url(),
    options=options)

  yield driver
  driver.quit()

This fixture still uses the Execution Cloud URL and the tunnel setting, but since our login test will become an Eyes test, we won’t need to call execute_script to declare when a test begins or ends. The Eyes session will do that for us.

Speaking of which, I had to write a fixture to create that Eyes session:

@pytest.fixture(scope='function')
def eyes(
  runner: VisualGridRunner,
  configuration: Configuration,
  remote_webdriver: Remote,
  request: pytest.FixtureRequest):

  eyes = Eyes(runner)
  eyes.set_configuration(configuration)

  eyes.open(
    driver=remote_webdriver,
    app_name='Bulldoggy: The Reminders App',
    test_name=request.node.name,
    viewport_size=RectangleSize(1024, 768))
  
  yield eyes
  eyes.close_async()

Again, all of this is boilerplate code for running tests with the Ultrafast Grid. I copied most of it from the Applitools tutorial for Selenium in Python. SDKs for other tools and languages need nearly identical setup. Note that the fixtures for the runner and configuration have session scope, meaning they run one time before all tests, whereas the fixture for the Eyes object has function scope, meaning it runs one time per test. All tests can share the same runner and config, while each test needs a unique Eyes session.

Rewriting the test with visual assertions

I had to change two main things in the login test:

  1. I had to call the new remote_webdriver and eyes fixtures.
  2. I had to add visual assertions with Applitools Eyes.

The code looked like this:

def test_login_with_eyes(remote_webdriver: Remote, eyes: Eyes):

  # Load the login page
  remote_webdriver.get("http://127.0.0.1:8000/login")

  # Check the login page
  eyes.check(Target.window().fully().with_name("Login page"))

  # Perform login
  remote_webdriver.find_element(By.NAME, "username").send_keys('pythonista')
  remote_webdriver.find_element(By.NAME, "password").send_keys("I<3testing")
  remote_webdriver.find_element(By.XPATH, "//button[.='Login']").click()

  # Check the reminders page
  eyes.check(Target.window().fully().with_name("Reminders page"))
  assert remote_webdriver.title == 'Reminders | Bulldoggy reminders app'

I actually added two visual assertions – one for the login page, and one for the reminders page. In fact, I removed all of the traditional assertions that verified elements since the visual checkpoints are simpler and add more coverage. The only traditional assertion I kept was for the page title, since that’s a data-oriented verification. Eyes tests can handle both functional and visual testing!

Fewer locators means less risk of breakage, and Execution Cloud’s self-healing capabilities should take care of any lingering locator problems. Furthermore, if I wanted to add any more tests, then I already have all the fixtures ready, so test case code should be fairly concise.

Running the Eyes test

I ran the test one more time with the same command. This time, Applitools treated it as an Eyes test, and the Eyes Test Manager showed the visual snapshots along with all the Execution Cloud artifacts:

Test results for an Eyes tests run with both Execution Cloud and Ultrafast Grid.

Execution Cloud worked together great with Ultrafast Grid!

Taking the next steps

Applitools Execution Cloud is a very cool new platform for running web UI tests. As an engineer, what I like about it most is that it provides AI-powered self-healing capabilities to locators without requiring me to change my test cases. I can make the same, standard Selenium WebDriver calls I've always coded. I don't need to rewrite my interactions, and I don't need to use a low-code/no-code platform to get self-healing locators. Even though Execution Cloud supports only WebDriver-based frameworks for now, there are plans to add support for other open source test frameworks (like Cypress and Playwright) in the future.

If you want to give Execution Cloud a try, all you need to do is register a free Applitools account and request access! Then, take one of our Selenium WebDriver tutorials – they’ve all been updated with Execution Cloud support.

The post Add self-healing to your Selenium tests with Applitools Execution Cloud appeared first on Automated Visual Testing | Applitools.

]]>
3 Reasons to Attend Front-End Test Fest 2023 https://applitools.com/blog/3-reasons-to-attend-front-end-test-fest-2023/ Thu, 01 Jun 2023 01:21:34 +0000 https://applitools.com/?p=50710 Hey there, automation testing enthusiasts! Joe Colantonio here, founder of TestGuild and the author of the new book Automation Awesomeness: 260 actionable affirmations to improve your QA and automation testing...

The post 3 Reasons to Attend Front-End Test Fest 2023 appeared first on Automated Visual Testing | Applitools.

]]>
FETF Upcoming Events

Hey there, automation testing enthusiasts! Joe Colantonio here, founder of TestGuild and the author of the new book Automation Awesomeness: 260 actionable affirmations to improve your QA and automation testing skills. I’m thrilled to bring you an incredible opportunity to level up your front-end testing game. Front-End Test Fest 2023 is just around the corner, and I’m here to share with you the top three reasons why you absolutely cannot miss this event!

Reason 1: Unleash Your Front-End Testing Superpowers

Front-End Test Fest, happening on June 7, 2023, is a one-day virtual event that promises to unlock the full potential of your front-end testing skills. In an ever-changing landscape, where front-end testing is evolving rapidly, this event will provide you with the latest trends, strategies, and practical tips from industry experts. From UI/UX design to component testing and AI-powered techniques, Front-End Test Fest covers it all. Discover how front-end testing is changing and equip yourself with the tools to become a front-end testing superhero!

See the full Front-End Test Fest program

Reason 2: Learn from Industry Experts

One of the most exciting aspects of Front-End Test Fest is the incredible lineup of industry experts who will be sharing their wisdom. We’ve gathered thought leaders like Filip Hric, Ramona Schwering, Colby Fayock, Andrew Knight, Jason Lengstorf, and more. These experts will dive deep into topics such as testing like a developer, self-healing tests, and the power of AI-accelerated release pipelines. Their knowledge and experience will empower you to overcome testing challenges and deliver high-quality front-end experiences in this rapidly evolving landscape.

Reason 3: Connect and Expand Your Professional Network

Front-End Test Fest is not just about learning; it’s also an opportunity to connect with a vibrant community of automation testing enthusiasts. Engage in live Q&A sessions, participate in interactive discussions, and build valuable relationships with professionals who share your passion for test automation. The networking opportunities available during the event will expand your horizons and provide a platform for collaboration and growth.

Front-End Test Fest 2023 is the ultimate event for unleashing the power of automation testing in front-end development. I’m co-hosting the event with Bekah Hawrot Weigel, so I want to personally invite you to join us on June 7, 2023. Don’t miss this free opportunity to level up your front-end testing superpowers, connect with a vibrant community, and move ahead in this ever-evolving field. Mark your calendars and grab your virtual seat. I look forward to seeing you there!

The post 3 Reasons to Attend Front-End Test Fest 2023 appeared first on Automated Visual Testing | Applitools.

]]>