DC/OS E2E Python Library¶
DC/OS E2E is a tool for spinning up and managing DC/OS clusters in test environments.
It includes a library which is focused on helping you to write tests which require DC/OS clusters.
See the CLI documentation for information on CLI tools built with DC/OS E2E.
Installing DC/OS E2E¶
Requires Python 3.5.2+. To avoid interfering with your system’s Python, we recommend using a virtualenv.
Check the Python version:
python3 --version
On Fedora, install Python development requirements:
sudo dnf install -y git python3-devel
On Ubuntu, install Python development requirements:
apt install -y gcc python3-dev
If you are not in a virtualenv, you may have to use sudo before the following command, or --user after install.
pip3 install --upgrade git+https://github.com/dcos/dcos-e2e.git@2018.11.09.1
Getting Started¶
To create a DC/OS Cluster, you need a backend.
Backends are customizable, but for now let’s use a standard Docker backend.
Each backend has different system requirements.
See the Docker backend documentation for details of what is needed for the Docker backend.
from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster
cluster = Cluster(cluster_backend=Docker())
It is also possible to use Cluster as a context manager.
Doing this means that the cluster is destroyed on exit.
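The pattern can be sketched with a small helper; run_in_ephemeral_cluster and the factory argument are illustrative, not part of the library, and this assumes the context manager yields the cluster object:

```python
def run_in_ephemeral_cluster(make_cluster, command):
    # make_cluster() should return a Cluster, e.g.
    # lambda: Cluster(cluster_backend=Docker()).
    # Using the cluster as a context manager guarantees it is
    # destroyed on exit, even if running the command raises.
    with make_cluster() as cluster:
        return [master.run(args=command) for master in cluster.masters]
```

This keeps cluster lifetime tied to a single block of test code, so interrupted runs do not leak clusters.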
To install DC/OS on a cluster, you need a DC/OS build artifact.
You can download one from the DC/OS releases page.
In this example we will use an open source DC/OS artifact downloaded to /tmp/dcos_generate_config.sh.
from pathlib import Path
oss_artifact = Path('/tmp/dcos_generate_config.sh')
cluster.install_dcos_from_path(
    build_artifact=oss_artifact,
    dcos_config={
        **cluster.base_config,
        **{
            'resolvers': ['8.8.8.8'],
        },
    },
    ip_detect_path=Docker().ip_detect_path,
)
cluster.wait_for_dcos_oss()
With a Cluster you can then run commands on arbitrary Nodes.
for master in cluster.masters:
    result = master.run(args=['echo', '1'])
    print(result.stdout)
There is much more that you can do with Clusters and Nodes, and there are other ways to create a cluster.
The Cluster class¶
Using DC/OS E2E usually involves creating one or more Clusters.
A cluster is created using a “backend”, which might be Docker or a cloud provider for example.
It is also possible to point DC/OS E2E to existing nodes.
A Cluster object is then used to interact with the DC/OS cluster.
class dcos_e2e.cluster.Cluster(cluster_backend, masters=1, agents=1, public_agents=1)¶
Create a DC/OS cluster.
Parameters: - cluster_backend – The backend to use for the cluster.
- masters – The number of master nodes to create.
- agents – The number of agent nodes to create.
- public_agents – The number of public agent nodes to create.
Choosing a Backend¶
See Backends for a backend to use for cluster_backend.
Creating a Cluster from Existing Nodes¶
It is possible to create a Cluster from existing nodes.
Clusters created with this method cannot be destroyed by DC/OS E2E.
It is assumed that DC/OS is already up and running on the given Nodes, and installing DC/OS is not supported.
classmethod Cluster.from_nodes(masters, agents, public_agents)¶
Create a cluster from existing nodes.
Parameters: - masters – The master nodes in an existing cluster.
- agents – The agent nodes in an existing cluster.
- public_agents – The public agent nodes in an existing cluster.
Returns: A cluster object with the nodes of an existing cluster.
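As a sketch, attaching to hosts that already run DC/OS might look like the following. The IP addresses, the centos user, and the shared public/private address are illustrative assumptions about your cluster, and the dcos_e2e imports are placed inside the function so the sketch can be defined without the library installed:

```python
from ipaddress import IPv4Address


def attach_to_existing_cluster(master_ips, agent_ips, public_agent_ips,
                               ssh_key_path):
    # Imported here so this sketch can be defined without dcos_e2e.
    from dcos_e2e.cluster import Cluster
    from dcos_e2e.node import Node

    def node(ip):
        # Assumption: these hosts use the same address for public and
        # private IPs; adjust if your network distinguishes them.
        return Node(
            public_ip_address=IPv4Address(ip),
            private_ip_address=IPv4Address(ip),
            default_user='centos',  # assumption: CentOS hosts
            ssh_key_path=ssh_key_path,
        )

    return Cluster.from_nodes(
        masters={node(ip) for ip in master_ips},
        agents={node(ip) for ip in agent_ips},
        public_agents={node(ip) for ip in public_agent_ips},
    )
```

Remember that a Cluster built this way cannot be destroyed by DC/OS E2E.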
Installing DC/OS¶
Some backends support installing DC/OS from a path to a build artifact, and some support installing it from a URL pointing to a build artifact. See how to use DC/OS Enterprise with DC/OS E2E.
Cluster.install_dcos_from_path(build_artifact, dcos_config, ip_detect_path, files_to_copy_to_genconf_dir=(), output=<Output.CAPTURE: 2>)¶
Parameters: - build_artifact – The Path to a build artifact to install DC/OS from.
- dcos_config – The DC/OS configuration to use.
- ip_detect_path – The path to an ip-detect script that will be used when installing DC/OS.
- files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
- output – What happens with stdout and stderr.
Raises: NotImplementedError – If it is more efficient for the given backend to use the DC/OS advanced installation method that takes build artifacts by URL string.
Return type: None
Cluster.install_dcos_from_url(build_artifact, dcos_config, ip_detect_path, output=<Output.CAPTURE: 2>, files_to_copy_to_genconf_dir=())¶
Installs DC/OS using the DC/OS advanced installation method.
If supported by the cluster backend, this method spins up a persistent bootstrap host that supplies all dedicated DC/OS hosts with the necessary installation files.
Since the bootstrap host is different from the host initiating the cluster creation, passing the build_artifact via URL string saves the time of copying the build_artifact to the bootstrap host.
However, some backends may not support using a bootstrap node. For these backends, each node downloads and extracts the build artifact. This may be very slow, as the build artifact is downloaded to and extracted on each node, one at a time.
Parameters: - build_artifact – The URL string to a build artifact to install DC/OS from.
- dcos_config – The contents of the DC/OS config.yaml.
- ip_detect_path – The path to an ip-detect script that will be used when installing DC/OS.
- files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
- output – What happens with stdout and stderr.
Return type: None
Destroying a Cluster¶
Clusters have a destroy() method.
This can be called manually, or Clusters can be used as context managers.
In this case the cluster will be destroyed when exiting the context manager.
with Cluster(cluster_backend=Docker(), masters=3, agents=2):
    pass
Waiting for DC/OS¶
Depending on the hardware and the backend, DC/OS can take some time to install.
The methods to wait for DC/OS repeatedly poll the cluster until services are up.
Choose wait_for_dcos_oss() or wait_for_dcos_ee() as appropriate.
Cluster.wait_for_dcos_oss(http_checks=True)¶
Wait until the DC/OS OSS boot process has completed.
Parameters: http_checks – Whether or not to wait for checks which involve HTTP. If this is False, this function may return before DC/OS is fully ready. This is useful in cases where an HTTP connection cannot be made to the cluster. For example, this is useful on macOS without a VPN set up.
Raises: dcos_e2e.exceptions.DCOSTimeoutError – Raised if cluster components did not become ready within one hour.
Return type: None
Cluster.wait_for_dcos_ee(superuser_username, superuser_password, http_checks=True)¶
Wait until the DC/OS Enterprise boot process has completed.
Parameters: - superuser_username – Username of the default superuser.
- superuser_password – Password of the default superuser.
- http_checks – Whether or not to wait for checks which involve HTTP. If this is False, this function may return before DC/OS is fully ready. This is useful in cases where an HTTP connection cannot be made to the cluster. For example, this is useful on macOS without a VPN set up.
Raises: dcos_e2e.exceptions.DCOSTimeoutError – Raised if cluster components did not become ready within one hour.
Return type: None
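The choice between the two wait methods can be made explicit with a small dispatch helper; wait_until_ready below is a hypothetical convenience wrapper, not part of the library:

```python
def wait_until_ready(cluster, superuser_username=None,
                     superuser_password=None, http_checks=True):
    # Enterprise clusters need superuser credentials; OSS clusters do not.
    if superuser_username is not None:
        cluster.wait_for_dcos_ee(
            superuser_username=superuser_username,
            superuser_password=superuser_password,
            http_checks=http_checks,
        )
    else:
        # Pass http_checks=False e.g. on macOS without a VPN set up.
        cluster.wait_for_dcos_oss(http_checks=http_checks)
```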
Running Integration Tests¶
It is possible to easily run DC/OS integration tests on a cluster. See how to run tests on DC/OS Enterprise.
with Cluster(cluster_backend=Docker()) as cluster:
    cluster.run_integration_tests(pytest_command=['pytest', '-k', 'mesos'])
Cluster.run_integration_tests(pytest_command, env=None, output=<Output.CAPTURE: 2>, tty=False, test_host=None, transport=None)¶
Run integration tests on a random master node.
Parameters: - pytest_command – The pytest command to run on the node.
- env – Environment variables to be set on the node before running the pytest_command. On enterprise clusters, DCOS_LOGIN_UNAME and DCOS_LOGIN_PW must be set.
- output – What happens with stdout and stderr.
- test_host – The node to run the given command on. If not given, an arbitrary master node is used.
- tty – If True, allocate a pseudo-tty. This means that the user's terminal is attached to the streams of the process. This means that the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.
- transport – The transport to use for communicating with nodes. If None, the Node's default_transport is used.
Returns: The result of the pytest command.
Raises: subprocess.CalledProcessError – If the pytest command fails.
Backends¶
DC/OS E2E comes with some backends and it is also possible to create custom backends.
Docker Backend¶
The Docker backend is used to spin up clusters on Docker containers, where each container is a DC/OS node.
Requirements¶
Docker 17.06+¶
Docker version 17.06 or later must be installed.
Plenty of memory must be given to Docker. On Docker for Mac, this can be done from Docker > Preferences > Advanced. This backend has been tested with a four node cluster with 9 GB memory given to Docker.
IP Routing Set Up for Docker¶
On macOS, hosts cannot connect to containers' IP addresses by default. This is required, for example, to access the web UI, to SSH to nodes and to use the DC/OS CLI.
Once the CLI is installed, run dcos-docker setup-mac-network to set up IP routing.
Without this, it is still possible to use some features.
In the library, specify transport as dcos_e2e.node.Transport.DOCKER_EXEC.
In the CLI, specify the --transport and --skip-http-checks options where available.
Operating System¶
This tool has been tested on macOS with Docker for Mac and on Linux.
It has also been tested on Windows on Vagrant.
The only supported way to use the Docker backend on Windows is using Vagrant and VirtualBox.
- Ensure Virtualization and VT-X support are enabled in your PC’s BIOS, and disable Hyper-V virtualization. See https://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/.
- Install VirtualBox and VirtualBox Extension Pack.
- Install Vagrant.
- Install the Vagrant plugin for persistent disks:
vagrant plugin install vagrant-persistent-storage
- Optionally install the Vagrant plugins to cache package downloads and keep guest additions updated:
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-vbguest
- Start PowerShell and download the DC/OS E2E Vagrantfile to a directory containing a DC/OS installer file:
((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/|github-owner|/|github-repository|/master/vagrant/Vagrantfile')) | Set-Content -LiteralPath Vagrantfile
- By default, the Vagrantfile installs DC/OS E2E from the most recent release at the time it is downloaded. To use a different release, or any Git reference, set the environment variable DCOS_E2E_REF:
$env:DCOS_E2E_REF = "master"
- Start the virtual machine and login:
vagrant up
vagrant ssh
You can now run dcos-docker commands or use the library.
To connect to the cluster nodes from the Windows host (e.g. to use the DC/OS web interface), run PowerShell as Administrator and add the virtual machine as a gateway:
route add 172.17.0.0 MASK 255.255.0.0 192.168.18.2
To shut down, log out of the virtual machine shell, and destroy the virtual machine and disk:
vagrant destroy
The route will be removed on reboot. You can manually remove the route in PowerShell Run as Administrator using:
route delete 172.17.0.0
doctor command¶
DC/OS E2E comes with the dcos-docker doctor command.
Run this command to check your system for common causes of problems.
DC/OS Installation¶
Clusters created by the Docker backend only support installing DC/OS via install_dcos_from_path().
Nodes of Clusters created by the Docker backend do not distinguish between public_ip_address and private_ip_address.
Limitations¶
Docker does not represent a real DC/OS environment with complete accuracy. This section describes the currently known differences between the Docker backend and a real DC/OS environment.
SELinux¶
Tests inherit the host’s environment. Any tests that rely on SELinux being available require it to be available on the host.
Storage¶
Docker does not support storage features expected in a real DC/OS environment.
Troubleshooting¶
Cleaning Up and Fixing “Out of Space” Errors¶
If a test is interrupted, it can leave behind containers, volumes and files. To remove these, run the following:
dcos-docker clean
macOS File Sharing¶
On macOS, /tmp is a symlink to /private/tmp.
/tmp is used by the harness.
Docker for Mac must be configured to allow /private to be bind mounted into Docker containers.
This is the default.
See Docker > Preferences > File Sharing.
Clock sync errors¶
On various platforms, the clock can get out of sync between the host machine and Docker containers.
This is particularly problematic if using check_time: true in the DC/OS configuration.
To work around this, run docker run --rm --privileged alpine hwclock -s.
Reference¶
class dcos_e2e.backends.Docker(workspace_dir=None, custom_container_mounts=None, custom_master_mounts=None, custom_agent_mounts=None, custom_public_agent_mounts=None, linux_distribution=<Distribution.CENTOS_7: 1>, docker_version=<DockerVersion.v1_13_1: 2>, storage_driver=None, docker_container_labels=None, docker_master_labels=None, docker_agent_labels=None, docker_public_agent_labels=None, transport=<Transport.DOCKER_EXEC: 2>, network=None, one_master_host_port_map=None)¶
Create a configuration for a Docker cluster backend.
Parameters: - workspace_dir – The directory in which large temporary files will be created. These files will be deleted when the cluster is destroyed. This is equivalent to dir in tempfile.mkstemp().
- custom_container_mounts – Custom mounts added to all node containers. See mounts in Containers.run.
- custom_master_mounts – Custom mounts added to master node containers. See mounts in Containers.run.
- custom_agent_mounts – Custom mounts added to agent node containers. See mounts in Containers.run.
- custom_public_agent_mounts – Custom mounts added to public agent node containers. See mounts in Containers.run.
- linux_distribution – The Linux distribution to boot DC/OS on.
- docker_version – The Docker version to install on the cluster nodes.
- storage_driver – The storage driver to use for Docker on the cluster nodes. By default, this is the host’s storage driver. If this is not one of aufs, overlay or overlay2, aufs is used.
- docker_container_labels – Docker labels to add to the cluster node containers. Akin to the dictionary option in Containers.run.
- docker_master_labels – Docker labels to add to the cluster master node containers. Akin to the dictionary option in Containers.run.
- docker_agent_labels – Docker labels to add to the cluster agent node containers. Akin to the dictionary option in Containers.run.
- docker_public_agent_labels – Docker labels to add to the cluster public agent node containers. Akin to the dictionary option in Containers.run.
- transport – The transport to use for communicating with nodes.
- network – The Docker network containers will be connected to. If no network is specified, the docker0 bridge network is used. It may not be possible to SSH to containers on a custom network on macOS.
- one_master_host_port_map – The exposed host ports for one of the master nodes. This is useful on macOS on which the container IP is not directly accessible from the host. By exposing the host ports, the user can reach the services on the master node using the mapped host ports. The host port map will be applied to one master only if there are multiple master nodes. See ports in Containers.run. Currently, only Transmission Control Protocol is supported.
workspace_dir¶
The directory in which large temporary files will be created. These files will be deleted at the end of a test run.
custom_container_mounts¶
Custom mounts added to all node containers. See mounts in Containers.run.
custom_master_mounts¶
Custom mounts added to master node containers. See mounts in Containers.run.
custom_agent_mounts¶
Custom mounts added to agent node containers. See mounts in Containers.run.
custom_public_agent_mounts¶
Custom mounts added to public agent node containers. See mounts in Containers.run.
linux_distribution¶
The Linux distribution to boot DC/OS on.
docker_version¶
The Docker version to install on the cluster nodes.
docker_storage_driver¶
The storage driver to use for Docker on the cluster nodes.
docker_container_labels¶
Docker labels to add to the cluster node containers. Akin to the dictionary option in Containers.run.
docker_master_labels¶
Docker labels to add to the cluster master node containers. Akin to the dictionary option in Containers.run.
docker_agent_labels¶
Docker labels to add to the cluster agent node containers. Akin to the dictionary option in Containers.run.
docker_public_agent_labels¶
Docker labels to add to the cluster public agent node containers. Akin to the dictionary option in Containers.run.
transport¶
The transport to use for communicating with nodes.
network¶
The Docker network containers will be connected to. If no network is specified, the docker0 bridge network is used. It may not be possible to SSH to containers on a custom network on macOS.
one_master_host_port_map¶
The exposed host ports for one of the master nodes. This is useful on macOS on which the container IP is not directly accessible from the host. By exposing the host ports, the user can reach the services on the master node using the mapped host ports. The host port map will be applied to one master only if there are multiple master nodes. See ports in Containers.run. Currently, only Transmission Control Protocol is supported.
container_name_prefix¶
The prefix that all container names will start with. This is useful, for example, for later finding all containers started with this backend.
AWS Backend¶
The AWS backend is used to spin up clusters using EC2 instances on Amazon Web Services, where each instance is a DC/OS node.
Requirements¶
Amazon Web Services¶
An Amazon Web Services account with sufficient funds must be available.
The AWS credentials for the account must be present either in the environment as environment variables or in the default file system location under ~/.aws/credentials, with an AWS profile in the environment referencing those credentials.
The Mesosphere internal AWS tool maws automatically stores account-specific temporary AWS credentials in the default file system location and exports the corresponding profile into the environment. After logging in with maws, clusters can be launched using the AWS backend.
For CI deployments, long-lived credentials are preferred. It is recommended to use the environment variables method for AWS credentials in that case.
The environment variables are set as follows:
export AWS_ACCESS_KEY_ID=<aws_access_key_id>
export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
The EC2 instances launched by the AWS backend will incur costs on the order of 24 US cents per instance, assuming the fixed cluster lifetime of two hours and m4.large EC2 instances.
ssh¶
The ssh command must be available.
Operating System¶
The AWS backend has been tested on macOS and on Linux.
It is not expected that it will work out of the box with Windows; see issue QUALITY-1771.
If your operating system is not supported, it may be possible to use Vagrant, or another Linux virtual machine.
doctor command¶
DC/OS E2E comes with the dcos-aws doctor command.
Run this command to check your system for common causes of problems.
DC/OS Installation¶
Clusters created by the AWS backend only support installing DC/OS via install_dcos_from_url().
This is because the installation method employs a bootstrap node that directly downloads the build_artifact from the specified URL.
Nodes of Clusters created by the AWS backend distinguish between public_ip_address and private_ip_address.
The private_ip_address refers to the internal network of the AWS stack, which is also used by DC/OS internally.
The public_ip_address allows for reaching AWS EC2 instances from the outside, e.g. from the dcos-e2e testing environment.
AWS Regions¶
When launching a cluster with Amazon Web Services, the region in which the cluster is launched is chosen using aws_region.
It is recommended to use us-west-1 or us-west-2 to keep the cost low.
See AWS Regions and Availability Zones for available regions.
Restricting access to the cluster¶
The AWS backend takes a parameter admin_location.
This parameter restricts access to the AWS stack from the outside to a particular IP address range.
The default value '0.0.0.0/0' allows accessing the cluster from anywhere.
It is recommended to restrict the address range to a subnet including the public IP of the machine executing tests with the AWS backend, for example <external-ip>/24.
Accessing cluster nodes¶
SSH can be used to access cluster nodes for the purpose of debugging if workspace_dir is set.
The AWS backend generates an SSH key file id_rsa in a cluster-specific sub-directory under the workspace_dir directory. The sub-directory is named after the unique cluster ID generated during cluster creation. The cluster ID is prefixed with dcos-e2e- and can be found through the DC/OS UI in the upper left corner or through the CCM UI when using maws with a Mesosphere AWS account.
Adding this key to the ssh-agent, or manually providing it via the -i flag after changing its file permissions to 400, will allow for connecting to the cluster via the ssh command.
The SSH user depends on the linux_distribution given to the AWS backend.
For CENTOS_7 that is centos.
It is important to keep in mind that files in the given workspace_dir are temporary and are removed when the cluster is destroyed.
If workspace_dir is unset, the AWS backend will create a new temporary directory in an operating-system-specific location.
Cluster lifetime¶
The cluster lifetime is fixed at two hours.
If the cluster was launched with maws (Mesosphere temporary AWS credentials), the cluster can be controlled via CCM. This allows for extending the cluster lifetime and also for cleaning up the cluster if anything goes wrong.
EC2 instance types¶
By default, the AWS backend launches m4.large instances for all DC/OS nodes.
It is possible to choose a different instance type through the aws_instance_type parameter.
See AWS Instance types for available instance types.
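The region, instance type, and access restriction above can be combined in a small factory; low_cost_aws_backend and the example CIDR are illustrative, with the dcos_e2e import placed inside so the sketch can be defined without the library installed:

```python
def low_cost_aws_backend(admin_cidr):
    # Imported here so this sketch can be defined without dcos_e2e.
    from dcos_e2e.backends import AWS

    # us-west-2 and m4.large (the default, made explicit) keep costs
    # low; admin_cidr should cover the public IP of the machine
    # running the tests, e.g. '203.0.113.0/24' (illustrative).
    return AWS(
        aws_region='us-west-2',
        aws_instance_type='m4.large',
        admin_location=admin_cidr,
    )
```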
Unsupported DC/OS versions¶
The AWS backend does not currently support DC/OS versions below 1.10. Adding support for DC/OS 1.9 is tracked in issue DCOS-21960.
Unsupported features¶
The AWS backend does not currently support the Cluster feature of copying files to the DC/OS installer by supplying files_to_copy_to_installer.
The progress on this feature is tracked in issue DCOS-21894.
Troubleshooting¶
In case of an error during the DC/OS installation, the journal from each node will be dumped and downloaded to the folder that the tests were executed in.
The logs are prefixed with the installation phase that failed: preflight, deploy, or postflight.
When using temporary credentials, make sure the credentials are still valid, or renewed, when destroying a cluster. If the credentials are no longer valid, the AWS backend does not delete the public/private key pair generated during cluster creation. It is therefore recommended to periodically renew temporary AWS credentials when executing tests using the AWS backend.
In rare cases it might also happen that an AWS stack deployment fails with the message ROLLBACK_IN_PROGRESS.
In that case, at least one of the EC2 instances failed to come up, and starting a new cluster is the only option.
Reference¶
class dcos_e2e.backends.AWS(aws_instance_type='m4.large', aws_region='us-west-2', admin_location='0.0.0.0/0', linux_distribution=<Distribution.CENTOS_7: 1>, workspace_dir=None, aws_key_pair=None, aws_cloudformation_stack_name=None, ec2_instance_tags=None, master_ec2_instance_tags=None, agent_ec2_instance_tags=None, public_agent_ec2_instance_tags=None)¶
Create a configuration for an AWS cluster backend.
Parameters: - admin_location – The IP address range from which the AWS nodes can be accessed.
- aws_instance_type – The AWS instance type to use. See Instance types.
- aws_region – The AWS location to create nodes in. See Regions and Availability Zones.
- linux_distribution – The Linux distribution to boot DC/OS on.
- workspace_dir – The directory in which large temporary files will be created. These files will be deleted at the end of a test run. This is equivalent to dir in tempfile.mkstemp().
- aws_key_pair – An optional tuple of (name, path) where the name is the identifier of an existing SSH public key on AWS KeyPairs and the path is the local path to the corresponding private key. The private key can then be used to connect to the cluster. If this is not given, a new key pair will be generated.
- aws_cloudformation_stack_name – The name of the CloudFormation stack to create. If this is not given, a random string is used.
- ec2_instance_tags – Tags to add to the cluster node EC2 instances.
- master_ec2_instance_tags – Tags to add to the cluster master node EC2 instances.
- agent_ec2_instance_tags – EC2 tags to add to the cluster agent node EC2 instances.
- public_agent_ec2_instance_tags – EC2 tags to add to the cluster public agent node EC2 instances.
admin_location¶
The IP address range from which the AWS nodes can be accessed.
aws_instance_type¶
The AWS instance type to use. See Instance types.
aws_region¶
The AWS location to create nodes in. See Regions and Availability Zones.
linux_distribution¶
The Linux distribution to boot DC/OS on.
workspace_dir¶
The directory in which large temporary files will be created. These files will be deleted at the end of a test run.
aws_key_pair¶
An optional tuple of (name, path) where the name is the identifier of an existing SSH public key on AWS KeyPairs and the path is the local path to the corresponding private key. The private key can then be used to connect to the cluster.
aws_cloudformation_stack_name¶
The name of the CloudFormation stack to create.
ec2_instance_tags¶
Tags to add to the cluster node EC2 instances.
master_ec2_instance_tags¶
Tags to add to the cluster master node EC2 instances.
agent_ec2_instance_tags¶
EC2 tags to add to the cluster agent node EC2 instances.
public_agent_ec2_instance_tags¶
EC2 tags to add to the cluster public agent node EC2 instances.
Raises: NotImplementedError – In case an unsupported Linux distribution has been passed in at backend creation.
Vagrant Backend¶
The Vagrant backend is used to spin up clusters on Vagrant virtual machines, where each virtual machine is a DC/OS node.
Requirements¶
Hardware¶
A minimum of 2 GB of free memory is required per DC/OS node.
Vagrant by HashiCorp¶
Vagrant must be installed. This has been tested with:
- Vagrant 2.1.1
- Vagrant 2.1.2
Oracle VirtualBox¶
VirtualBox must be installed. This has been tested with VirtualBox 5.1.18.
vagrant-vbguest plugin¶
vagrant-vbguest must be installed.
doctor command¶
DC/OS E2E comes with the dcos-vagrant doctor command.
Run this command to check your system for common causes of problems.
Reference¶
class dcos_e2e.backends.Vagrant(virtualbox_description='', workspace_dir=None)¶
Create a configuration for a Vagrant cluster backend.
Parameters: - workspace_dir – The directory in which large temporary files will be created. These files will be deleted at the end of a test run. This is equivalent to dir in tempfile.mkstemp().
- virtualbox_description – A description string to add to VirtualBox VMs.
workspace_dir¶
The directory in which large temporary files will be created. These files will be deleted at the end of a test run.
virtualbox_description¶
A description string to add to VirtualBox VMs.
Custom Backends¶
DC/OS E2E supports pluggable backends. You may wish to create a new backend to support a new cloud provider for example.
How to Create a Custom Backend¶
To create a custom Cluster backend, you need to create two classes: a ClusterManager and a ClusterBackend.
A ClusterBackend may take custom parameters and is useful for storing backend-specific options.
A ClusterManager implements the nuts and bolts of cluster management for a particular backend.
This implements things like creating nodes and installing DC/OS on those nodes.
Please consider contributing your backend to this repository if it is stable and could be of value to a wider audience.
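The division of labour between the two classes can be sketched with a toy, duck-typed example. A real backend would subclass ClusterBackend and ClusterManager from dcos_e2e.backends._base_classes and implement node creation plus the install methods; everything here is illustrative:

```python
class NoopBackend:
    """Toy backend: holds backend-specific options only."""

    def __init__(self, flavor='small'):
        # Backend-specific option; a real backend might store a region,
        # an image name, or mount configuration here.
        self.flavor = flavor

    @property
    def cluster_cls(self):
        # Tells DC/OS E2E which manager creates and manages clusters
        # for this backend.
        return NoopClusterManager


class NoopClusterManager:
    """Toy manager: a real one provisions nodes and installs DC/OS."""

    def __init__(self, masters, agents, public_agents, cluster_backend):
        self.cluster_backend = cluster_backend
        # A real manager would create master, agent and public agent
        # nodes here; this sketch only records the requested counts.
        self.counts = (masters, agents, public_agents)
```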
References¶
class dcos_e2e.backends._base_classes.ClusterBackend¶
Cluster backend base class.
cluster_cls¶
Return the ClusterManager class to use to create and manage a cluster.
Return type: Type[ClusterManager]
class dcos_e2e.backends._base_classes.ClusterManager(masters, agents, public_agents, cluster_backend)¶
Create a DC/OS cluster with the given cluster_backend.
Parameters: - masters – The number of master nodes to create.
- agents – The number of agent nodes to create.
- public_agents – The number of public agent nodes to create.
- cluster_backend – Details of the specific DC/OS Docker backend to use.
install_dcos_from_url_with_bootstrap_node(build_artifact, dcos_config, ip_detect_path, output, files_to_copy_to_genconf_dir)¶
Install DC/OS from a URL with a bootstrap node.
If a method which implements this abstract method raises a NotImplementedError, users of the backend can still install DC/OS from a URL in an inefficient manner.
Parameters: - build_artifact – The URL string to a build artifact to install DC/OS from.
- dcos_config – The DC/OS configuration to use.
- ip_detect_path – The ip-detect script to use for installing DC/OS.
- output – What happens with stdout and stderr.
- files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
Return type: None
-
install_dcos_from_path_with_bootstrap_node
(build_artifact, dcos_config, ip_detect_path, output, files_to_copy_to_genconf_dir)¶ Install DC/OS from a build artifact passed as a file system Path.
If a method which implements this abstract method raises a
NotImplementedError
, users of the backend can still install DC/OS from a path in an inefficient manner.Parameters: - build_artifact – The path to a build artifact to install DC/OS from.
- dcos_config – The DC/OS configuration to use.
- ip_detect_path – The
ip-detect
script to use for installing DC/OS. - output – What happens with stdout and stderr.
- files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
Return type:
Cluster Nodes¶
Clusters are made of Nodes.
The Node interface is backend agnostic.
Nodes are generally used to run commands.
Nodes are either constructed manually, in order to create a cluster with Cluster.from_nodes(), or retrieved from an existing Cluster.
class dcos_e2e.node.Node(public_ip_address, private_ip_address, default_user, ssh_key_path, default_transport=<Transport.SSH: 1>)¶
Parameters:
- public_ip_address – The public IP address of the node.
- private_ip_address – The IP address used by the DC/OS component running on this node.
- default_user – The default username to use for connections.
- ssh_key_path – The path to an SSH key which can be used to SSH to the node as the default_user user. The file must only have permissions to be read by (and optionally written to) the owner.
- default_transport – The transport to use for communicating with nodes.

public_ip_address¶
The public IP address of the node.

private_ip_address¶
The IP address used by the DC/OS component running on this node.

default_user¶
The default username to use for connections.

default_transport¶
The transport used to communicate with the node.
Running a Command on a Node¶
There are two methods used to run commands on Nodes: run and popen, which are roughly equivalent to their subprocess namesakes.
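The subprocess side of that analogy can be shown locally, without a cluster: Node.run corresponds to subprocess.run (block, then return a completed process), while Node.popen corresponds to subprocess.Popen (return a handle to a running process).

```python
import subprocess

# Node.run is analogous to subprocess.run: it blocks until the command
# finishes, then returns a completed-process object.
completed = subprocess.run(
    ['echo', '1'],
    stdout=subprocess.PIPE,
    check=True,
)
print(completed.stdout)  # b'1\n'

# Node.popen is analogous to subprocess.Popen: it returns a handle to a
# running process which can be waited on later.
process = subprocess.Popen(['echo', '2'], stdout=subprocess.PIPE)
stdout, _ = process.communicate()
print(stdout)  # b'2\n'
```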
Node.run(args, user=None, output=<Output.CAPTURE: 2>, env=None, shell=False, tty=False, transport=None, sudo=False)¶
Run a command on this node as the given user.
Parameters:
- args – The command to run on the node.
- user – The username to communicate as. If None, the default_user is used instead.
- output – What happens with stdout and stderr.
- env – Environment variables to be set on the node before running the command. A mapping of environment variable names to values.
- shell – If False (the default), each argument is passed as a literal value to the command. If True, the command line is interpreted as a shell command, with a special meaning applied to some characters (e.g. $, &&, >). This means the caller must quote arguments if they may contain these special characters, including whitespace.
- tty – If True, allocate a pseudo-tty. This means that the user’s terminal is attached to the streams of the process. When using a TTY, different transports may use different line endings.
- transport – The transport to use for communicating with nodes. If None, the Node’s default_transport is used.
- sudo – Whether to use sudo to run commands.
Return type:
Returns: The representation of the finished process.
Raises: subprocess.CalledProcessError – The process exited with a non-zero code.
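The quoting semantics behind the shell parameter can be illustrated locally with shlex. This is a sketch of the semantics only, not of DC/OS E2E's implementation:

```python
import shlex

args = ['echo', 'a value with spaces', '$HOME']

# With shell=False (the default), each argument is passed literally, so
# whitespace and shell metacharacters keep their literal meaning. Quoting
# each argument, as below, is how that literal meaning survives a shell.
literal = ' '.join(shlex.quote(arg) for arg in args)
print(literal)  # echo 'a value with spaces' '$HOME'

# With shell=True the joined command line is interpreted by a shell, so
# $HOME would be expanded and the spaces would split one value into three
# separate words unless the caller quotes them.
as_shell = ' '.join(args)
print(as_shell)  # echo a value with spaces $HOME
```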
Node.popen(args, user=None, env=None, shell=False, transport=None)¶
Open a pipe to a command run on a node as the given user.
Parameters:
- args – The command to run on the node.
- user – The username to communicate as. If None, the default_user is used instead.
- env – Environment variables to be set on the node before running the command. A mapping of environment variable names to values.
- shell – If False (the default), each argument is passed as a literal value to the command. If True, the command line is interpreted as a shell command, with a special meaning applied to some characters (e.g. $, &&, >). This means the caller must quote arguments if they may contain these special characters, including whitespace.
- transport – The transport to use for communicating with nodes. If None, the Node’s default_transport is used.
Return type:
Returns: The pipe object attached to the specified process.
Sending a File to a Node¶
Node.send_file(local_path, remote_path, user=None, transport=None, sudo=False)¶
Copy a file to this node.
Parameters:
- local_path – The path on the host of the file to send.
- remote_path – The path on the node to place the file.
- user – The name of the remote user to send the file as. If None, the default_user is used instead.
- transport – The transport to use for communicating with nodes. If None, the Node’s default_transport is used.
- sudo – Whether to use sudo to create the directory which holds the remote file.
Return type:
Roles¶
Transports¶
Outputs¶
class dcos_e2e.node.Output¶
Output capture options for running commands.
When using LOG_AND_CAPTURE, stdout and stderr are merged into stdout.

LOG_AND_CAPTURE = 1¶
Log output at the debug level. If the code returns a subprocess.CompletedProcess, the stdout and stderr will be contained in the return value. However, they will be merged into stdout.

CAPTURE = 2¶
Capture stdout and stderr. If the code returns a subprocess.CompletedProcess, the stdout and stderr will be contained in the return value.

NO_CAPTURE = 3¶
Do not capture stdout or stderr.
-
Using DC/OS Enterprise¶
DC/OS Enterprise requires various configuration variables which are not allowed or required by open source DC/OS.
The following example shows how to use DC/OS Enterprise with DC/OS E2E.
from pathlib import Path

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster
from passlib.hash import sha512_crypt

ee_artifact = Path('/tmp/dcos_generate_config.ee.sh')
license_key_contents = Path('/tmp/license-key.txt').read_text()

superuser_username = 'my_username'
superuser_password = 'my_password'

extra_config = {
    'superuser_username': superuser_username,
    'superuser_password_hash': sha512_crypt.hash(superuser_password),
    'fault_domain_enabled': False,
    'license_key_contents': license_key_contents,
}

with Cluster(cluster_backend=Docker()) as cluster:
    cluster.install_dcos_from_path(
        build_artifact=ee_artifact,
        dcos_config={
            **cluster.base_config,
            **extra_config,
        },
        ip_detect_path=Docker().ip_detect_path,
    )
    cluster.wait_for_dcos_ee(
        superuser_username=superuser_username,
        superuser_password=superuser_password,
    )
    cluster.run_integration_tests(
        env={
            'DCOS_LOGIN_UNAME': superuser_username,
            'DCOS_LOGIN_PW': superuser_password,
        },
        pytest_command=['pytest', '-k', 'tls'],
    )
Linux Distributions¶
Some backends support multiple Linux distributions on nodes. Not all distributions are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
Exceptions¶
The following custom exceptions are defined in DC/OS E2E.
class dcos_e2e.exceptions.DCOSTimeoutError¶
Raised if DC/OS does not become ready within a given time boundary.
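Since DCOSTimeoutError is an ordinary exception class, callers can catch it around the wait methods. The sketch below uses a local stand-in class and a toy wait function, because a real cluster is not available here:

```python
class DCOSTimeoutError(Exception):
    """Local stand-in for dcos_e2e.exceptions.DCOSTimeoutError."""


def wait_for_dcos(ready):
    # Toy stand-in for Cluster.wait_for_dcos_oss / wait_for_dcos_ee:
    # raise if the cluster did not become ready within the time boundary.
    if not ready:
        raise DCOSTimeoutError('DC/OS did not start within the time boundary')


try:
    wait_for_dcos(ready=False)
except DCOSTimeoutError as exc:
    print(exc)  # DC/OS did not start within the time boundary
```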
Docker Versions¶
Some backends support multiple Docker versions on nodes. Not all Docker versions are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
Docker Storage Drivers¶
Some backends support multiple Docker storage drivers on nodes. Not all storage drivers are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
Changelog¶
2018.11.09.1¶
- Backwards incompatible change: Change
--no-test-env
to--test-env
onrun
commands, with the opposite default.
2018.11.09.0¶
- Fix an issue which caused incompatible version errors between
keyring
andSecretStore
dependencies.
2018.11.07.0¶
- Add
dcos-docker create-loopback-sidecar
anddcos-docker destroy-loopback-sidecar
commands to provide unformatted block devices to DC/OS. - Add
dcos-docker clean
command to clean left over artifacts. - Backwards incompatible change: Changed names of VPN containers on macOS.
2018.10.17.0¶
- Fix an issue which stopped the SSH transport from working on CLIs.
2018.10.16.0¶
- Remove
log_output_live
parameters on various functions in favor of newoutput
options. Node.__init__
’sssh_key_path
parameter now expects a path to an SSH key file with specific permissions.- See the documentation for this class for details.
2018.10.12.0¶
- The
docker-exec
transport uses interactive mode only when running in a terminal.
2018.10.11.0¶
- Show full path on
download-artifact
downloads. - Default to downloading to the current directory for
download-artifact
downloads. - Use a TTY on CLI run commands only if Stdin is a TTY.
2018.10.10.0¶
- Fix issues which stopped pre-built Linux binaries from working.
2018.09.25.0¶
wait_for_dcos_oss
andwait_for_dcos_ee
now raise a customDCOSTimeoutError
if DC/OS has not started within one hour.
2018.09.06.0¶
- The
--variant
option is now required for thedcos-aws
CLI. - Added the ability to install on Linux from a pre-built binary.
- Add the ability to do a release to a fork.
2018.08.31.0¶
- Fix using macOS with no custom network.
2018.08.28.0¶
- Support for CoreOS on the AWS backend.
- Fix an issue which prevented the Vagrant backend from working.
2018.08.22.0¶
- Improve diagnostics when creating a Docker-backed cluster with no running Docker daemon.
2018.08.13.0¶
- Add instructions for uninstalling DC/OS E2E.
2018.08.03.0¶
- Pin
msrestazure
pip dependency to specific version to avoid dependency conflict.
2018.07.31.0¶
- Add a
dcos-docker doctor
check that relevant Docker images can be built.
2018.07.30.0¶
- Add Red Hat Enterprise Linux 7.4 support to the AWS backend.
2018.07.27.0¶
- Fix bug which meant that a user could not log in after
dcos-docker wait
on DC/OS Open Source clusters. - Backwards incompatible change: Remove
files_to_copy_to_installer
fromCluster.__init__
and addfiles_to_copy_to_genconf_dir
as an argument toCluster.install_dcos_from_path
as well asCluster.install_dcos_from_url
. - Add
files_to_copy_to_genconf_dir
as an argument toNode.install_dcos_from_path
andNode.install_dcos_from_url
.
2018.07.25.0¶
- Add the capability of sending a directory to a
Node
viaNode.send_file
. - Add
ip_detect_path
to the eachClusterBackend
as a property and to each install DC/OS function as a parameter.
2018.07.23.0¶
- Add an initial
dcos-aws
CLI.
2018.07.22.1¶
- Add
dcos-docker download-artifact
anddcos-vagrant download-artifact
.
2018.07.22.0¶
- Add
verbose
option to multiple commands.
2018.07.16.0¶
- Add
virtualbox_description
parameter to theVagrant
backend. - Change the default transport for the Docker backend to
DOCKER_EXEC
.
2018.07.15.0¶
- Add a
--one-master-host-port-map
option todcos-docker create
.
2018.07.10.0¶
- Execute
node-poststart
checks inCluster.wait_for_dcos
andCluster.wait_for_dcos_ee
. - Add
dcos-vagrant doctor
checks.
2018.07.03.5¶
- Add a
--network
option to thedcos-docker
CLI.
2018.07.03.0¶
- Add a
dcos-vagrant
CLI.
2018.07.01.0¶
- Renamed Homebrew formula. To upgrade from a previous version, follow Homebrew’s linking instructions after upgrade instructions.
2018.06.30.0¶
- Add a
Vagrant
backend.
2018.06.28.2¶
- Add a
aws_instance_type
parameter to theAWS
backend.
2018.06.28.0¶
- Compare
Node
objects based on thepublic_ip_address
andprivate_ip_address
.
2018.06.26.0¶
- Add a
network
parameter to theDocker
backend.
2018.06.20.0¶
- Add platform-independent DC/OS installation method from
Path
and URL onNode
.
2018.06.18.0¶
- Add
dcos-docker doctor
check for a version conflict between systemd and Docker. - Allow installing DC/OS by a URL on the Docker backend, and a cluster
from_nodes
.
2018.06.14.1¶
- Add
Cluster.remove_node
.
2018.06.14.0¶
- Add Ubuntu support to the Docker backend.
- Add
aws_key_pair
parameter to the AWS backend. - Fix Linuxbrew installation on Ubuntu.
2018.06.12.1¶
- Add a
--wait
flag todcos-docker create
to also wait for the cluster.
2018.06.12.0¶
dcos-docker create
now creates clusters with the--cluster-id
“default” by default.
2018.06.05.0¶
- Change
Node.default_ssh_user
toNode.default_user
. - Add a
docker exec
transport toNode
operations. - Add a
--transport
options to multipledcos-docker
commands.
2018.05.29.0¶
- Do not pin
setuptools
to an exact version.
2018.05.24.2¶
- Add
--env
option todcos-docker run
.
2018.05.24.1¶
- Make
xfs_info
available on nodes, meaning that preflight checks can be run on nodes with XFS. - Fix
dcos-docker doctor
for cases wheredf
produces very long results.
2018.05.21.0¶
- Show a formatted error rather than a traceback if Docker cannot be connected to.
- Custom backends must now implement a base_config method.
- Custom backends’ installation methods must now take dcos_config rather than extra_config.
- Cluster.install_dcos_from_url and Cluster.install_dcos_from_path now take dcos_config rather than extra_config.
2018.05.17.0¶
- Add a
--variant
option todcos-docker create
to speed up cluster creation.
2018.05.15.0¶
- Add a
test_host
parameter toCluster.run_integration_tests
. - Add the ability to specify a node to use for
dcos-docker run
.
2018.05.14.0¶
- Show IP address in
dcos-docker inspect
.
2018.05.10.0¶
- Expose the SSH key location in
dcos-docker inspect
. - The network created by setup-mac-network now survives restarts.
2018.05.02.0¶
- Previously not all volumes were destroyed when destroying a cluster from the CLI or with the
Docker
backend. This has been resolved. To remove dangling volumes from previous versions, usedocker volume prune
. - Backwards incompatible change:
mount
parameters toDocker.__init__
now take alist
ofdocker.types.Mount
s. - Docker version 17.06 or later is now required for the CLI and for the
Docker
backend.
2018.04.30.2¶
- Added
dcos-docker destroy-mac-network
command. - Added a
--force
parameter todcos-docker setup-mac-network
to override files and containers.
2018.04.29.0¶
- Added
dcos-docker setup-mac-network
command.
2018.04.25.0¶
- Logs from dependencies are no longer emitted.
- The
dcos-docker
CLI now gives more feedback to let you know that things are happening.
2018.04.19.0¶
- The AWS backend now supports DC/OS 1.9.
- The Docker backend now supports having custom mounts which apply to all nodes.
- Add
custom-volume
parameter (and similar for each node type) todcos-docker create
.
2018.04.11.0¶
- Add an AWS backend to the library.
- Add ability to control which labels are added to particular node types on the
Docker
backend. - Add support for Ubuntu on the
Docker
backend.
2018.04.02.1¶
- Add a new
dcos-docker doctor
check for suitablesed
for DC/OS 1.9. - Support
cluster.run_integration_tests
on DC/OS 1.9.
2018.04.02.0¶
- Add support for DC/OS 1.9 on Linux hosts.
dcos-docker doctor
returns a status code of1
if there are any errors.- Add a new
dcos-docker doctor
check for free space in the Docker root directory.
2018.03.26.0¶
- Add a
dcos-docker doctor
check that a supported storage driver is available. - Fix error with using Docker version v17.12.1-ce inside Docker nodes.
- Fix race condition between installing DC/OS and SSH starting.
- Remove support for Ubuntu on the Docker backend.
2018.03.07.0¶
- Fix public agents on DC/OS 1.10.
- Remove options to use Fedora and Debian in the
Docker
backend nodes. - Fix the Ubuntu distribution on the
Docker
backend. - Add support for Docker
17.12.1-ce
on nodes in theDocker
backend. - Exceptions in
create
in the CLI point towards thedoctor
command. - Removed a race condition in the
doctor
command. dcos-docker run
now exits with the return code of the command run.dcos-docker destroy-list
is a new command anddcos-docker destroy
now adheres to the common semantics of the CLI.
2018.02.28.0¶
- Add
Vagrantfile
to run DC/OS E2E in a virtual machine. - Add instructions for running DC/OS E2E on Windows.
- Allow relative paths for the build artifact.
2018.02.27.0¶
- Backwards incompatible change: Move
default_ssh_user
parameter fromCluster
toNode
. Thedefault_ssh_user
is now used forNode.run
,Node.popen
andNode.send_file
ifuser
is not supplied.
2018.02.23.0¶
- Add
linux_distribution
parameter to theDocker
backend. - Add support for CoreOS in the
Docker
backend. - Add
docker_version
parameter to theDocker
backend. - The fallback Docker storage driver for the
Docker
backend is nowaufs
. - Add
storage_driver
parameter to theDocker
backend. - Add
docker_container_labels
parameter to theDocker
backend. - Logs are now less cluttered with escape characters.
- Documentation is now on Read The Docs.
- Add a Command Line Interface.
- Vendor dcos_test_utils so --process-dependency-links is not needed.
- Backwards incompatible change: Cluster’s files_to_copy_to_installer argument is now a List of Tuples rather than a Dict.
- Add a tty option to Node.run and Cluster.run_integration_tests.
2018.01.25.0¶
- Backwards incompatible change:
Change the default behavior of
Node.run
andNode.popen
to quote arguments, unless a newshell
parameter isTrue
. These methods now behave similarly tosubprocess.run
. - Add custom string representation for
Node
object. - Bump
dcos-test-utils
for better diagnostics reports.
2018.01.22.0¶
- Expose the
public_ip_address
of the SSH connection and theprivate_ip_address
of its DC/OS component onNode
objects. - Bump
dcos-test-utils
for better diagnostics reports.
2017.12.11.0¶
- Replace the extended
wait_for_dcos_ee
timeout with a precedingdcos-diagnostics
check.
2017.12.08.0¶
- Extend
wait_for_dcos_ee
timeout for waiting until the DC/OS CA cert can be fetched.
2017.11.29.0¶
- Backwards incompatible change:
Introduce separate
wait_for_dcos_oss
andwait_for_dcos_ee
methods. Both methods improve the boot process waiting time for the corresponding DC/OS version. - Backwards incompatible change:
run_integration_tests
now requires users to callwait_for_dcos_oss
orwait_for_dcos_ee
beforehand.
2017.11.21.0¶
- Remove
ExistingCluster
backend and replaced it with simplerCluster.from_nodes
method. - Simplified the default configuration for the Docker backend.
Notably this no longer contains a default
superuser_username
orsuperuser_password_hash
. - Support
custom_agent_mounts
andcustom_public_agent_mounts
on the Docker backend.
2017.11.15.0¶
- Remove
destroy_on_error
anddestroy_on_success
fromCluster
. Instead, avoid usingCluster
as a context manager to keep the cluster alive.
2017.11.14.0¶
- Backwards incompatible change: Rename
DCOS_Docker
backend toDocker
backend. - Backwards incompatible change: Replace
generate_config_path
withbuild_artifact
that can either be aPath
or a HTTP(S) URL string. This allows for supporting installation methods that require build artifacts to be downloaded from a HTTP server. - Backwards incompatible change: Remove
run_as_root
. Instead require adefault_ssh_user
for backends torun
commands over SSH on any clusterNode
created with this backend. - Backwards incompatible change: Split the DC/OS installation from the ClusterManager
__init__
procedure. This allows for installing DC/OS afterCluster
creation, and therefore enables decoupling of transferring files ahead of the installation process. - Backwards incompatible change: Explicit distinction of installation methods by providing separate methods for
install_dcos_from_path
andinstall_dcos_from_url
instead of inspecting the type ofbuild_artifact
. - Backwards incompatible change:
log_output_live
is no longer an attribute of theCluster
class. It may now be passed separately as a parameter for each output-generating operation.
2017.11.02.0¶
- Added
Node.send_file
to allow files to be copied to nodes. - Added
custom_master_mounts
to the DC/OS Docker backend. - Backwards incompatible change: Removed
files_to_copy_to_masters
. Instead, usecustom_master_mounts
orNode.send_file
.
2017.10.04.0¶
- Added Apache2 license.
- Repository moved to
https://github.com/dcos/dcos-e2e
. - Added
run
, which is similar torun_as_root
but takes auser
argument. - Added
popen
, which can be used for running commands asynchronously.
2017.08.11.0¶
- Fix bug where Node reprs were put into environment variables rather than IP addresses. This prevented some integration tests from working.
2017.08.08.0¶
- Fixed issue which prevented
files_to_copy_to_installer
from working.
2017.08.05.0¶
- The Enterprise DC/OS integration tests now require environment variables describing the IP addresses of the cluster. DC/OS E2E now passes these environment variables.
2017.06.23.0¶
- Wait for 5 minutes after diagnostics check.
2017.06.22.0¶
- Account for the name of
3dt
having changed todcos-diagnostics
.
2017.06.21.1¶
- Support platforms where
$HOME
is set as/root
. Cluster.wait_for_dcos
now waits for CA cert to be available.
2017.06.21.0¶
- Add ability to specify a workspace.
- Fixed issue with DC/OS Docker files not existing in the repository.
2017.06.20.0¶
- Vendor DC/OS Docker so a path is not needed.
- If
log_output_live
is set toTrue
for aCluster
, logs are shown inwait_for_dcos
.
2017.06.19.0¶
- More storage efficient.
- Removed need to tell
Cluster
whether a cluster is an enterprise cluster. - Removed need to tell
Cluster
thesuperuser_password
. - Added ability to set environment variables on remote nodes when running commands.
2017.06.15.0¶
- Initial release.
Contributing¶
Contributions to this repository must pass tests and linting.
Install Contribution Dependencies¶
On Ubuntu, install system requirements:
apt install -y gcc python3-dev
Install dependencies in a virtual environment.
pip3 install --editable .[dev]
Optionally install the following tools for linting and interacting with Travis CI:
gem install travis --no-rdoc --no-ri
Spell checking requires enchant.
This can be installed on macOS, for example, with Homebrew:
brew install enchant
and on Ubuntu with apt
:
apt install -y enchant
Linting Bash requires shellcheck. This can be installed on macOS, for example, with Homebrew:
brew install shellcheck
and on Ubuntu with apt
:
apt-get install -y shellcheck
Linting¶
Install Contribution Dependencies.
Run lint tools:
make lint
These can be run in parallel with:
make lint --jobs --output-sync=target
To fix some lint errors, run the following:
make fix-lint
Tests for this package¶
Some tests require the Docker backend and some tests require the AWS backend. See the Docker backend documentation for details of what is needed for the Docker backend. See the AWS backend documentation for details of what is needed for the AWS backend.
To run the full test suite, set environment variables for DC/OS Enterprise artifact URLs:
export EE_MASTER_ARTIFACT_URL=https://...
export EE_1_9_ARTIFACT_URL=https://...
export EE_1_10_ARTIFACT_URL=https://...
export EE_1_11_ARTIFACT_URL=https://...
Download dependencies which are used by the tests:
python admin/download_artifacts.py
A license key is required for some tests:
cp /path/to/license-key.txt /tmp/license-key.txt
Run pytest
:
pytest
To run the tests concurrently, use pytest-xdist. For example:
pytest -n 2
Documentation¶
Run the following commands to build and open the documentation:
make docs
make open-docs
CI¶
Linting and some tests are run on Travis CI.
See .travis.yml
for details on the limitations.
To check if a new change works on CI, unfortunately it is necessary to change .travis.yml
to run the desired tests.
Most of the CLI functionality is not covered by automated tests. Changes should take this into consideration.
Rotating license keys¶
DC/OS Enterprise requires a license key. Mesosphere uses license keys internally for testing, and these expire regularly. A license key is encrypted and used by the Travis CI tests.
To update this link use the following command, after setting the LICENSE_KEY_CONTENTS
environment variable.
This command will affect all builds and not just the current branch.
We do not use encrypted secret files in case the contents are shown in the logs.
We do not add an encrypted environment variable to .travis.yml
because the license is too large.
travis env set --repo dcos/dcos-e2e LICENSE_KEY_CONTENTS $LICENSE_KEY_CONTENTS
Updating the DC/OS Enterprise build artifact links¶
Private links to DC/OS Enterprise artifacts are used by Travis CI.
To update these links use the following commands, after setting the following environment variables:
EE_MASTER_ARTIFACT_URL
EE_1_9_ARTIFACT_URL
EE_1_10_ARTIFACT_URL
EE_1_11_ARTIFACT_URL
travis env set --repo dcos/dcos-e2e EE_MASTER_ARTIFACT_URL $EE_MASTER_ARTIFACT_URL
travis env set --repo dcos/dcos-e2e EE_1_9_ARTIFACT_URL $EE_1_9_ARTIFACT_URL
travis env set --repo dcos/dcos-e2e EE_1_10_ARTIFACT_URL $EE_1_10_ARTIFACT_URL
travis env set --repo dcos/dcos-e2e EE_1_11_ARTIFACT_URL $EE_1_11_ARTIFACT_URL
Updating the Amazon Web Services credentials¶
Private credentials for Amazon Web Services are used by Travis CI.
To update the credentials use the following commands, after setting the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
travis env set --repo dcos/dcos-e2e AWS_ACCESS_KEY_ID $AWS_ACCESS_KEY_ID
travis env set --repo dcos/dcos-e2e AWS_SECRET_ACCESS_KEY $AWS_SECRET_ACCESS_KEY
Currently credentials are taken from the OneLogin Secure Notes note dcos-e2e integration testing AWS credentials
.
Parallel builders¶
Travis CI has a maximum test run time of 50 minutes. In order to avoid this and to see failures faster, we run multiple builds per commit. We run roughly one builder per test; some tests are grouped together because they run quickly.
Goals¶
Avoid flakiness¶
For timeouts, err on the side of a much longer timeout than necessary.
Do not access the web while running tests.
Parallelizable Tests¶
The tests in this repository, and tests which use this harness, are slow. This harness must not get in the way of parallelization efforts.
Logging¶
End to end tests are notoriously difficult to get meaning from. To help with this, an “excessive logging” policy is used here.
Robustness¶
Narrowing down bugs from end to end tests is hard enough without dealing with the framework’s bugs. This repository aims to maintain high standards of code quality, and quality enforcement by CI is part of that.
Version Policy¶
This repository aims to work with DC/OS OSS and DC/OS Enterprise master
branches.
These are moving targets.
For this reason, CalVer is used: the version number is the date at which the repository was last known to work with the DC/OS OSS and DC/OS Enterprise master branches.
Updating DC/OS Test Utils and DC/OS Launch¶
DC/OS Test Utils and DC/OS Launch are vendored in this repository. To update DC/OS Test Utils or DC/OS Launch:
Update the SHAs in admin/update_vendored_packages.py.
The following creates a commit with changes to the vendored packages:
admin/update_vendored_packages.sh
Release Process¶
Outcomes¶
- A new git tag available to install.
- A release on GitHub.
- An updated Homebrew recipe.
- A changed Vagrantfile.
- Linux binaries.
- The new version title in the changelog.
Prerequisites¶
- python3 on your PATH set to Python 3.5+.
- Docker available and set up for your user.
- virtualenv.
- Push access to this repository.
- Trust that master is ready and high enough quality for release. This includes the Next section in CHANGELOG.rst being up to date.
Perform a Release¶
Get a GitHub access token:
Follow the GitHub instructions for getting an access token.
Set environment variables to GitHub credentials, e.g.:
export GITHUB_TOKEN=75c72ad718d9c346c13d30ce762f121647b502414
Perform a release:
export GITHUB_OWNER=dcos
curl https://raw.githubusercontent.com/"$GITHUB_OWNER"/dcos-e2e/master/admin/release.sh | bash
Versioning, Support and API Stability¶
DC/OS E2E aims to work with DC/OS OSS and DC/OS Enterprise master
branches.
These are moving targets.
For this reason, CalVer is used: the version number is the date at which the repository was last known to work with the DC/OS OSS and DC/OS Enterprise master branches.
As well as master
, DC/OS E2E supports the following versions of DC/OS:
- DC/OS 1.11
- DC/OS 1.10
- DC/OS 1.9 (limited support, see DC/OS 1.9 and below)
Other versions may work but are not tested.
See GitHub for releases.
There is no guarantee of API stability at this point. All backwards incompatible changes will be documented in the Changelog.
DC/OS 1.9 and below¶
Installers for DC/OS 1.9 and below require a version of sed that is not compatible with the BSD sed that ships with macOS.
dcos-docker doctor includes a check for compatible sed versions.
To use these versions of DC/OS with macOS and install_dcos_from_path, we can either modify the installer or modify the local version of sed.
Modify the installer¶
The following command replaces an installer named dcos_generate_config.sh with a slightly different installer that works with the default sed on macOS.
sed \
  -e 'H;1h;$!d;x' \
  -e "s/sed '0,/sed '1,/" \
  dcos_generate_config.sh > dcos_generate_config.sh.bak
mv dcos_generate_config.sh.bak dcos_generate_config.sh