Web Application & REST API Integration Plugin
The plugin provides integration between web application testing functionality and REST API features.
Installation
- Copy the below line to the `dependencies` section of the project `build.gradle` file. Please make sure to use the same version for all VIVIDUS dependencies.

  Example 1. build.gradle

  implementation(group: 'org.vividus', name: 'vividus-plugin-web-app-to-rest-api', version: '0.6.6')

- If the project was imported to the IDE before adding the new dependency, re-generate the configuration files for the used IDE and then refresh the project in the IDE.
Table Transformers
FROM_SITEMAP
The `FROM_SITEMAP` transformer generates a table based on the website sitemap.
The use of the `web-application.main-page-url` property for setting the main page for crawling is deprecated and will be removed in VIVIDUS 0.7.0; please use either the `mainPageUrl` transformer parameter or the `transformer.from-sitemap.main-page-url` property instead.
| Parameter | Description |
|---|---|
| `mainPageUrl` | Main application page URL, used as the initial seed URL that is fetched by the crawler to extract new URLs in it and follow them for crawling. |
| `siteMapRelativeUrl` | Relative URL of the `sitemap.xml` file. |
| `ignoreErrors` | Ignore sitemap parsing errors (`true` or `false`). |
| `column` | The column name in the generated table. |
| Property Name | Acceptable values | Default | Description |
|---|---|---|---|
| `transformer.from-sitemap.main-page-url` | URL | | Main application page URL, used as the initial seed URL that is fetched by the crawler to extract new URLs in it and follow them for crawling. |
| | `true` / `false` | | Ignore sitemap parsing errors. |
| | `true` / `false` | | Defines whether URLs that redirect to one that has already been included in the table are excluded from the table. |
Examples:
{transformer=FROM_SITEMAP, siteMapRelativeUrl=/sitemap.xml, ignoreErrors=true, column=page-url}
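As a rough mental model (not the actual VIVIDUS implementation), the transformer fetches the sitemap, extracts every `<loc>` entry and emits a single-column ExamplesTable. A minimal Python sketch:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = '{http://www.sitemaps.org/schemas/sitemap/0.9}'

def sitemap_to_table(sitemap_xml: str, column: str) -> str:
    """Build a single-column ExamplesTable from sitemap XML content."""
    root = ET.fromstring(sitemap_xml)
    urls = [loc.text.strip() for loc in root.iter(f'{SITEMAP_NS}loc')]
    return '\n'.join([f'|{column}|'] + [f'|{url}|' for url in urls])

# Sample sitemap standing in for one fetched from siteMapRelativeUrl
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://mypage.com/</loc></url>
  <url><loc>https://mypage.com/about</loc></url>
</urlset>"""

print(sitemap_to_table(sitemap, 'page-url'))
```

The real transformer additionally resolves the sitemap URL against the main page URL and honors `ignoreErrors` when the XML cannot be parsed.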
FROM_HEADLESS_CRAWLING
The `FROM_HEADLESS_CRAWLING` transformer generates a table based on the results of headless crawling.
The use of the `web-application.main-page-url` property for setting the main page for crawling is deprecated and will be removed in VIVIDUS 0.7.0; please use either the `mainPageUrl` transformer parameter or the `transformer.from-headless-crawling.main-page-url` property instead.
| Parameter Name | Description |
|---|---|
| `mainPageUrl` | Main application page URL, used as the initial seed URL that is fetched by the crawler to extract new URLs in it and follow them for crawling. |
| `column` | The column name in the generated table. |
| Property Name | Acceptable values | Default | Description |
|---|---|---|---|
| **General** | | | |
| `transformer.from-headless-crawling.main-page-url` | URL | | Main application page URL, used as the initial seed URL that is fetched by the crawler to extract new URLs in it and follow them for crawling. |
| | Comma-separated list of values | | List of relative URLs; a seed URL is a URL that is fetched by the crawler to extract new URLs in it and follow them for crawling. |
| | Regular expression | | The regular expression to match URLs. The crawler will not crawl URLs matching the given regular expression, and they will not be added to the resulting table. URI fragments and URL queries are ignored during filtering. |
| | Regular expression | no default value | The regular expression to match extensions in URLs. The crawler will ignore all URLs referring to files with extensions matching the given regular expression. |
| | `true` / `false` | | Defines whether URLs that redirect to one that has already been included in the table are excluded from the table. |
| | integer | | Socket timeout in milliseconds. |
| | integer | | Connection timeout in milliseconds. |
| | integer | | Max allowed size of a page in bytes. Pages larger than this size will not be fetched. |
| | integer | | Maximum connections per host. |
| | integer | | Maximum total connections. |
| | `true` / `false` | | Whether to follow redirects. |
| | integer | | Maximum depth of crawling; for unlimited depth this parameter should be set to -1. |
| | integer | | Number of pages to fetch; for an unlimited number of pages this parameter should be set to -1. |
| | integer | | Politeness delay in milliseconds between sending two requests to the same host. |
| | integer | | Max number of outgoing links which are processed from a page. |
| | `true` / `false` | | Whether to honor links with the nofollow flag. |
| | `true` / `false` | | Whether to honor links with the noindex flag. |
| | | | Cookie policy as defined per the cookie specification. |
| | `true` / `false` | | Whether to consider single-level domains valid (e.g. http://localhost). |
| | `true` / `false` | | Whether to crawl https pages. |
| | | | Set of headers to set for every crawling request being sent. |
| **Proxy** | | | |
| | | | Proxy host. |
| | | | Proxy port. |
| | | | Username to authenticate with the proxy. |
| | | | Password to authenticate with the proxy. |
Examples:
{transformer=FROM_HEADLESS_CRAWLING, column=page-url}
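Conceptually, the headless crawler performs a breadth-first traversal from the main page URL, bounded by the crawl depth and page limits listed above. A simplified Python sketch, where `fetch_links` stands in for real HTTP fetching and link extraction (the actual crawler also applies politeness delays, URL filters and redirect handling):

```python
from collections import deque

def crawl(seed_url, fetch_links, max_depth=2, max_pages=10):
    """Breadth-first crawl bounded by depth and page count; returns visited URLs."""
    visited = []
    queue = deque([(seed_url, 0)])
    seen = {seed_url}
    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        visited.append(url)
        if max_depth != -1 and depth >= max_depth:
            continue  # do not expand links beyond the depth limit
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited

# Tiny in-memory site standing in for real HTTP fetching
site = {
    '/': ['/a', '/b'],
    '/a': ['/a/x'],
    '/b': [],
    '/a/x': [],
}
print(crawl('/', lambda u: site.get(u, []), max_depth=1, max_pages=10))
```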
FROM_HTML
The `FROM_HTML` transformer generates a table based on the text content, HTML content or attributes of HTML elements found in the requested HTML page.
| Parameter Name | Description |
|---|---|
| `pageUrl` | The URL of the page to build the table upon. |
| | The name of the variable containing the source HTML; only variables of certain scopes are allowed. |
| `column` | The column name in the generated table. |
| `xpathSelector` | The XPath selector used to select HTML elements in the HTML page. Using an XPath selector we can extract an element's HTML content, attributes and text content, as shown in the following example: |

| Property Name | Acceptable values | Default | Description |
|---|---|---|---|
| | | | Set of headers to set when requesting the page. |
<!DOCTYPE html>
<html>
<body>
<a href="/r">R</a>
<a href="/g">G</a>
<a href="/b">B</a>
</body>
</html>
Examples:
{transformer=FROM_HTML, column=relative-url, pageUrl=https://mypage.com, xpathSelector=//a/@href}
|relative-url|
|/r |
|/g |
|/b |
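For comparison, the same `//a/@href` extraction can be reproduced with any XPath-capable HTML processor. A Python sketch using the standard library's ElementTree (which requires well-formed markup, so the DOCTYPE is omitted):

```python
import xml.etree.ElementTree as ET

html = """<html>
<body>
<a href="/r">R</a>
<a href="/g">G</a>
<a href="/b">B</a>
</body>
</html>"""

def extract_attribute(document: str, tag: str, attribute: str) -> list:
    """Collect attribute values from all matching elements (akin to //a/@href)."""
    root = ET.fromstring(document)
    return [el.get(attribute) for el in root.iter(tag) if el.get(attribute) is not None]

print(extract_attribute(html, 'a', 'href'))
```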
Steps
Resource validations
Steps to check resource availability using HTTP requests.
Resource validation statuses

| Status | Description |
|---|---|
| | An HTTP request to the resource returns a status code other than 200 OK. |
| | An HTTP request to the resource returns the 200 OK status code. |
| | The resource validation has already been performed, i.e. the same resource may be present on several pages, so it does not need to be validated twice. |
Validate resources on web pages
Validates resources on web pages.
Resource validation logic:
- If the `pages` row contains a relative URL, it is resolved against the URL in the `web-application.main-page-url` property, i.e. if the main page URL is `https://elderscrolls.bethesda.net/` and the relative URL is `/skyrim10`, the resulting URL will be `https://elderscrolls.bethesda.net/skyrim10`.
- Collect elements by the CSS selector from each page.
- Get either the `href` or `src` attribute value from each element; if neither attribute exists, the validation fails.
- For each received value, execute a HEAD request.
- If the status code is 200 OK, the resource validation is considered passed.
- If the status code is one of 404 Not Found, 405 Method Not Allowed, 501 Not Implemented, 503 Service Unavailable, a GET request is executed.
- If the GET status code is 200 OK, the resource validation is considered passed, otherwise failed.
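The HEAD-then-GET fallback described above can be expressed as a small decision function. A Python sketch, where `head` and `get` are assumed callables returning HTTP status codes (the real step additionally resolves relative URLs and skips already-validated resources):

```python
RETRYABLE_WITH_GET = {404, 405, 501, 503}

def validate_resource(url, head, get):
    """Return True if the resource passes the HEAD-then-GET validation."""
    status = head(url)
    if status == 200:
        return True
    if status in RETRYABLE_WITH_GET:
        return get(url) == 200  # fall back to GET for servers rejecting HEAD
    return False

# Stub responses standing in for real HTTP calls
print(validate_resource('/ok', head=lambda u: 200, get=lambda u: 200))       # True
print(validate_resource('/no-head', head=lambda u: 405, get=lambda u: 200))  # True
print(validate_resource('/missing', head=lambda u: 404, get=lambda u: 404))  # False
```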
Then all resources found by $htmlLocatorType `$htmlLocator` are valid on:$pages
Deprecated syntax (will be removed in VIVIDUS 0.7.0):
Then all resources by selector `$cssSelector` are valid on:$pages
- `$htmlLocatorType` - The HTML locator type, either `CSS selector` or `XPath`.
- `$htmlLocator` - The actual locator.
- `$pages` - The pages to validate resources on.
Then all resources found by xpath `//a` are valid on:
|pages |
|https://vividus.org/ |
|/test-automation-made-awesome|
Validate resources from HTML
Validates resources from an HTML document.
Resource validation logic:
- Collect elements by the CSS selector from the specified HTML document.
- Get either the `href` or `src` attribute value from each element; if neither attribute exists, the validation fails. If the element value contains a relative URL, it is resolved against the URL in the `web-application.main-page-url` property.
- For each received value, execute a HEAD request.
- If the status code is 200 OK, the resource validation is considered passed.
- If the status code is one of 404 Not Found, 405 Method Not Allowed, 501 Not Implemented, 503 Service Unavailable, a GET request is executed.
- If the GET status code is 200 OK, the resource validation is considered passed, otherwise failed.
Then all resources found by $htmlLocatorType `$htmlLocator` in $html are valid
Deprecated syntax (will be removed in VIVIDUS 0.7.0):
Then all resources by selector `$cssSelector` from $html are valid
- `$htmlLocatorType` - The HTML locator type, either `CSS selector` or `XPath`.
- `$htmlLocator` - The actual locator.
- `$html` - The HTML document to validate.
Then all resources found by CSS selector `a,img` in ${source-code} are valid
Validate redirects
Checks that all URLs from the ExamplesTable redirect to the proper pages with the correct number of redirects. The validation fails if either the actual final URL or the number of redirects does not match the expected values.
The step throws an error if the HTTP response status code of the checked URL is out of the range 200-207.
Then I validate HTTP redirects: $expectedRedirects
- `$expectedRedirects` - The ExamplesTable with redirect parameters containing the following columns:
  - `startUrl` - The URL from which redirection starts.
  - `endUrl` - The expected final URL to redirect to.
  - `redirectsNumber` - The expected number of redirects between `startUrl` and `endUrl` (optional).
Then I validate HTTP redirects:
|startUrl |endUrl |redirectsNumber |
|http://example.com/redirect |http://example.com/get-response |1 |
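The check can be modeled as following the redirect chain hop by hop and comparing both the final URL and the hop count. A Python sketch, where `get_location` is an assumed callable returning the redirect target of a URL, or `None` for a final response:

```python
def follow_redirects(start_url, get_location, max_hops=20):
    """Follow a redirect chain; return (final_url, number_of_redirects)."""
    url, hops = start_url, 0
    while hops <= max_hops:
        target = get_location(url)
        if target is None:
            return url, hops
        url, hops = target, hops + 1
    raise RuntimeError('Too many redirects')

# Stub redirect chain standing in for real HTTP responses
chain = {'http://example.com/redirect': 'http://example.com/get-response'}
final_url, redirects = follow_redirects('http://example.com/redirect', chain.get)
print(final_url, redirects)
```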
Validate SSL rating
Then SSL rating for URL `$url` is $comparisonRule `$gradeName`
- `$url` - The URL for SSL scanning and grading.
- `$comparisonRule` - The comparison rule.
- `$gradeName` - The name of the grade. The possible values: `A+`, `A`, `A-`, `B`, `C`, `D`, `E`, `F`, `T`, `M`.
| Property Name | Acceptable values | Default | Description |
|---|---|---|---|
| | URL | | SSL Labs endpoint. |

Examples:
Then SSL rating for URL `https://www.google.com` is equal to `B`