DC/OS E2E¶
DC/OS E2E is a tool for spinning up and managing DC/OS clusters in test environments.
Installation¶
DC/OS E2E consists of a Python library and a dcos-docker CLI.
The CLI works only with the Docker backend, while the library supports multiple backends.
The CLI can be installed with Homebrew on macOS, and the library and CLI can be installed together with pip on Linux and macOS.
Windows is not currently supported, but instructions for using DC/OS E2E on Windows with Vagrant are provided in the documentation for particular backends.
CLI macOS With Homebrew¶
To install the CLI on macOS, install Homebrew.
Then install the latest stable version:
brew install https://raw.githubusercontent.com/dcos/dcos-e2e/master/dcosdocker.rb
To upgrade from an older version, run the following command:
brew upgrade https://raw.githubusercontent.com/dcos/dcos-e2e/master/dcosdocker.rb
Or install the latest master. Note that Homebrew installs the dependencies for the latest released version, so installing master may not work:
brew install --HEAD https://raw.githubusercontent.com/dcos/dcos-e2e/master/dcosdocker.rb
Run dcos-docker doctor to make sure that your system is ready to go:
$ dcos-docker doctor
Library and CLI with Python¶
If the CLI has been installed with Homebrew, you do not need to install the library to use the CLI.
Requires Python 3.5.2+. To avoid interfering with your system’s Python, we recommend using a virtualenv.
Check the Python version:
python3 --version
On Fedora, install Python development requirements:
sudo dnf install -y git python3-devel
On Ubuntu, install Python development requirements:
apt install -y gcc python3-dev
Optionally replace master with a particular version of DC/OS E2E.
The latest release is 2018.05.24.2.
See available versions.
If you are not in a virtualenv, you may have to use sudo before the following command, or --user after install.
pip3 install --upgrade git+https://github.com/dcos/dcos-e2e.git@master
Run dcos-docker doctor to make sure that your system is ready to go for the Docker backend:
$ dcos-docker doctor
Getting Started with the Library¶
To create a DC/OS Cluster, you need a backend.
Backends are customizable, but for now let’s use a standard Docker backend.
Each backend has different system requirements.
See the Docker backend documentation for details of what is needed for the Docker backend.
from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster
cluster = Cluster(cluster_backend=Docker())
It is also possible to use Cluster as a context manager.
Doing this means that the cluster is destroyed on exit.
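For example, a minimal sketch of the context manager pattern, using the same Docker backend as above:

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster

# The cluster is destroyed automatically when the block exits,
# even if an exception is raised inside it.
with Cluster(cluster_backend=Docker()) as cluster:
    for master in cluster.masters:
        master.run(args=['echo', '1'])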
To install DC/OS on a cluster, you need a DC/OS build artifact.
You can download one from the DC/OS releases page.
In this example we will use an open source DC/OS artifact downloaded to /tmp/dcos_generate_config.sh.
from pathlib import Path
oss_artifact = Path('/tmp/dcos_generate_config.sh')
cluster.install_dcos_from_path(
    build_artifact=oss_artifact,
    dcos_config={
        **cluster.base_config,
        **{
            'resolvers': ['8.8.8.8'],
        },
    }
)
cluster.wait_for_dcos_oss()
With a Cluster you can then run commands on arbitrary Nodes.
for master in cluster.masters:
    result = master.run(args=['echo', '1'])
    print(result.stdout)
There is much more that you can do with Clusters and Nodes, and there are other ways to create a cluster.
CLI¶
DC/OS E2E also provides a command line interface for the Docker backend. It allows you to create, manage and destroy DC/OS clusters. See dcos-docker CLI for details.
Reference¶
Python Library¶
DC/OS E2E includes a library which is focused on helping you to write tests which require DC/OS clusters.
The Cluster class¶
Using DC/OS E2E usually involves creating one or more Clusters.
A cluster is created using a “backend”, which might be Docker or a cloud provider for example.
It is also possible to point DC/OS E2E to existing nodes.
A Cluster object is then used to interact with the DC/OS cluster.
class dcos_e2e.cluster.Cluster(cluster_backend, masters=1, agents=1, public_agents=1, files_to_copy_to_installer=())¶
Create a DC/OS cluster.
Parameters:
- cluster_backend (ClusterBackend) – The backend to use for the cluster.
- masters (int) – The number of master nodes to create.
- agents (int) – The number of agent nodes to create.
- public_agents (int) – The number of public agent nodes to create.
- files_to_copy_to_installer (Iterable[Tuple[Path, Path]]) – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
Choosing a Backend¶
See Backends for a backend to use for cluster_backend.
Creating a Cluster from Existing Nodes¶
It is possible to create a Cluster from existing nodes.
Clusters created with this method cannot be destroyed by DC/OS E2E.
It is assumed that DC/OS is already up and running on the given Nodes, and installing DC/OS is not supported.
classmethod Cluster.from_nodes(masters, agents, public_agents)¶
Create a cluster from existing nodes.
Returns: A cluster object with the nodes of an existing cluster.
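A minimal sketch, assuming the three arguments are sets of Node objects from an existing cluster; the IP addresses, SSH user and key path below are placeholders:

from ipaddress import IPv4Address
from pathlib import Path

from dcos_e2e.cluster import Cluster
from dcos_e2e.node import Node

# Placeholder details of one existing master node.
master = Node(
    public_ip_address=IPv4Address('203.0.113.10'),
    private_ip_address=IPv4Address('10.0.0.10'),
    default_ssh_user='centos',
    ssh_key_path=Path('/path/to/id_rsa'),
)

cluster = Cluster.from_nodes(
    masters={master},
    agents=set(),
    public_agents=set(),
)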
Installing DC/OS¶
Some backends support installing DC/OS from a path to a build artifact. Some backends support installing DC/OS from a URL pointing to a build artifact. See how to use DC/OS Enterprise with DC/OS E2E.
Cluster.install_dcos_from_path(build_artifact, dcos_config, log_output_live=False)¶
Install DC/OS from a build artifact on the file system.
Raises: NotImplementedError – Raised if it is more efficient for the given backend to use the DC/OS advanced installation method that takes build artifacts by URL string.
Return type: None
Cluster.install_dcos_from_url(build_artifact, dcos_config, log_output_live=False)¶
Installs DC/OS using the DC/OS advanced installation method, if supported by the backend.
This method spins up a persistent bootstrap host that supplies all dedicated DC/OS hosts with the necessary installation files.
Since the bootstrap host is different from the host initiating the cluster creation, passing the build_artifact via URL string saves the time of copying the build_artifact to the bootstrap host.
Raises: NotImplementedError – Raised if the given backend provides a more efficient installation method than the DC/OS advanced installation method.
Return type: None
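For example, a sketch of installing from a URL on a backend which supports this method; the artifact URL is a placeholder:

cluster.install_dcos_from_url(
    build_artifact='https://example.com/dcos_generate_config.sh',  # placeholder URL
    dcos_config=cluster.base_config,
)
cluster.wait_for_dcos_oss()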
Destroying a Cluster¶
Clusters have a destroy() method.
This can be called manually, or Clusters can be used as context managers.
In this case the cluster will be destroyed when exiting the context manager.
with Cluster(cluster_backend=Docker(), masters=3, agents=2):
    pass
Cluster.destroy()¶
Destroy all nodes in the cluster.
Return type: None
Waiting for DC/OS¶
Depending on the hardware and the backend, DC/OS can take some time to install.
The methods to wait for DC/OS repeatedly poll the cluster until services are up.
Choose wait_for_dcos_oss() or wait_for_dcos_ee() as appropriate.
Cluster.wait_for_dcos_oss()¶
Wait until the DC/OS OSS boot process has completed.
Raises: RetryError – Raised if any cluster component did not become healthy in time.
Return type: None
Cluster.wait_for_dcos_ee(superuser_username, superuser_password)¶
Wait until the DC/OS Enterprise boot process has completed.
Raises: RetryError – Raised if any cluster component did not become healthy in time.
Return type: None
Running Integration Tests¶
It is possible to easily run DC/OS integration tests on a cluster. See how to run tests on DC/OS Enterprise.
with Cluster(cluster_backend=Docker()) as cluster:
    cluster.run_integration_tests(pytest_command=['pytest', '-k', 'mesos'])
Cluster.run_integration_tests(pytest_command, env=None, log_output_live=False, tty=False, test_host=None)¶
Run integration tests on a random master node.
Parameters:
- pytest_command (List[str]) – The pytest command to run on the node.
- env (Optional[Dict[str, Any]]) – Environment variables to be set on the node before running the pytest_command. On enterprise clusters, DCOS_LOGIN_UNAME and DCOS_LOGIN_PW must be set.
- log_output_live (bool) – If True, log output of the pytest_command live. If True, stderr is merged into stdout in the return value.
- test_host (Optional[Node]) – The node to run the given command on. If not given, an arbitrary master node is used.
- tty (bool) – If True, allocate a pseudo-tty. This means that the user's terminal is attached to the streams of the process. This means that the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.
Returns: The result of the pytest command.
Raises: subprocess.CalledProcessError – If the pytest command fails.
Backends¶
DC/OS E2E comes with some backends and it is also possible to create custom backends.
Docker Backend¶
The Docker backend is used to spin up clusters on Docker containers, where each container is a DC/OS node.
Requirements¶
Docker version 17.06 or later must be installed.
Plenty of memory must be given to Docker. On Docker for Mac, this can be done from Docker > Preferences > Advanced. This backend has been tested with a four node cluster with 9 GB memory given to Docker.
On macOS, hosts cannot connect to containers' IP addresses by default. Once the CLI is installed, run dcos-docker setup-mac-network.
ssh¶
The ssh command must be available.
This tool has been tested on macOS with Docker for Mac and on Linux.
It has also been tested on Windows on Vagrant.
The only supported way to use the Docker backend on Windows is using Vagrant and VirtualBox.
- Ensure Virtualization and VT-X support is enabled in your PC’s BIOS. Disable Hyper-V virtualization. See https://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/.
- Install VirtualBox and VirtualBox Extension Pack.
- Install Vagrant.
- Install the Vagrant plugin for persistent disks:
vagrant plugin install vagrant-persistent-storage
- Optionally install the Vagrant plugins to cache package downloads and keep guest additions updates:
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-vbguest
- Start PowerShell and download the DC/OS E2E Vagrantfile to a directory containing a DC/OS installer file:
((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/dcos/dcos-e2e/master/vagrant/Vagrantfile')) | Set-Content -LiteralPath Vagrantfile
- By default, the Vagrantfile installs DC/OS E2E from the most recent release at the time it is downloaded. To use a different release, or any Git reference, set the environment variable DCOS_E2E_REF:
$env:DCOS_E2E_REF = "master"
- Start the virtual machine and login:
vagrant up
vagrant ssh
You can now run dcos-docker CLI commands or use the Python Library.
To connect to the cluster nodes from the Windows host (e.g. to use the DC/OS web interface), run PowerShell as Administrator and add the virtual machine as a gateway:
route add 172.17.0.0 MASK 255.255.0.0 192.168.18.2
To shut down, log out of the virtual machine shell, and destroy the virtual machine and disk:
vagrant destroy
The route will be removed on reboot. You can manually remove the route by running the following in PowerShell as Administrator:
route delete 172.17.0.0
dcos-docker doctor¶
DC/OS E2E comes with the dcos-docker doctor command. Run this command to check your system for common causes of problems.
DC/OS Installation¶
Clusters created by the Docker backend only support installing DC/OS via install_dcos_from_path().
Nodes of Clusters created by the Docker backend do not distinguish between public_ip_address and private_ip_address.
Limitations¶
Docker does not represent a real DC/OS environment with complete accuracy. This section describes the currently known differences between the Docker backend and a real DC/OS environment.
Tests inherit the host's environment. Any tests that rely on SELinux being available require it to be available on the host.
Docker does not support storage features expected in a real DC/OS environment.
Troubleshooting¶
If a test is interrupted, it can leave behind containers, volumes and files. To remove these, run the following:
docker stop $(docker ps -a -q --filter="name=dcos-e2e")
docker rm --volumes $(docker ps -a -q --filter="name=dcos-e2e")
docker volume prune --force
If this repository is available, run make clean.
On macOS, /tmp is a symlink to /private/tmp.
/tmp is used by the harness.
Docker for Mac must be configured to allow /private to be bind mounted into Docker containers.
This is the default.
See Docker > Preferences > File Sharing.
On various platforms, the clock can get out of sync between the host machine and Docker containers.
This is particularly problematic if using check_time: true in the DC/OS configuration.
To work around this, run docker run --rm --privileged alpine hwclock -s.
Reference¶
class dcos_e2e.backends.Docker(workspace_dir=None, custom_container_mounts=None, custom_master_mounts=None, custom_agent_mounts=None, custom_public_agent_mounts=None, linux_distribution=<Distribution.CENTOS_7: 1>, docker_version=<DockerVersion.v1_13_1: 2>, storage_driver=None, docker_container_labels=None, docker_master_labels=None, docker_agent_labels=None, docker_public_agent_labels=None)¶
Create a configuration for a Docker cluster backend.
Parameters:
- workspace_dir (Optional[Path]) – The directory in which large temporary files will be created. These files will be deleted at the end of a test run. This is equivalent to dir in tempfile.mkstemp().
- custom_container_mounts (Optional[List[Mount]]) – Custom mounts to add to all node containers. See mounts in Containers.run.
- custom_master_mounts (Optional[List[Mount]]) – Custom mounts to add to master node containers. See mounts in Containers.run.
- custom_agent_mounts (Optional[List[Mount]]) – Custom mounts to add to agent node containers. See mounts in Containers.run.
- custom_public_agent_mounts (Optional[List[Mount]]) – Custom mounts to add to public agent node containers. See mounts in Containers.run.
- linux_distribution (Distribution) – The Linux distribution to boot DC/OS on.
- docker_version (DockerVersion) – The Docker version to install on the cluster nodes.
- storage_driver (Optional[DockerStorageDriver]) – The storage driver to use for Docker on the cluster nodes. By default, this is the host's storage driver. If this is not one of aufs, overlay or overlay2, aufs is used.
- docker_container_labels (Optional[Dict[str, str]]) – Docker labels to add to the cluster node containers. Akin to the dictionary option in Containers.run.
- docker_master_labels (Optional[Dict[str, str]]) – Docker labels to add to the cluster master node containers. Akin to the dictionary option in Containers.run.
- docker_agent_labels (Optional[Dict[str, str]]) – Docker labels to add to the cluster agent node containers. Akin to the dictionary option in Containers.run.
- docker_public_agent_labels (Optional[Dict[str, str]]) – Docker labels to add to the cluster public agent node containers. Akin to the dictionary option in Containers.run.
Attributes:
- workspace_dir – The directory in which large temporary files will be created. These files will be deleted at the end of a test run.
- custom_container_mounts – Custom mounts to add to all node containers. See mounts in Containers.run.
- custom_master_mounts – Custom mounts to add to master node containers. See mounts in Containers.run.
- custom_agent_mounts – Custom mounts to add to agent node containers. See mounts in Containers.run.
- custom_public_agent_mounts – Custom mounts to add to public agent node containers. See mounts in Containers.run.
- linux_distribution – The Linux distribution to boot DC/OS on.
- docker_version – The Docker version to install on the cluster nodes.
- docker_storage_driver – The storage driver to use for Docker on the cluster nodes.
- docker_container_labels – Docker labels to add to the cluster node containers. Akin to the dictionary option in Containers.run.
- docker_master_labels – Docker labels to add to the cluster master node containers. Akin to the dictionary option in Containers.run.
- docker_agent_labels – Docker labels to add to the cluster agent node containers. Akin to the dictionary option in Containers.run.
- docker_public_agent_labels – Docker labels to add to the cluster public agent node containers. Akin to the dictionary option in Containers.run.
AWS Backend¶
The AWS backend is used to spin up clusters using EC2 instances on Amazon Web Services, where each instance is a DC/OS node.
Requirements¶
An Amazon Web Services account with sufficient funds must be available.
The AWS credentials for the account must be present either in the environment as environment variables, or in the default file system location under ~/.aws/credentials with an AWS profile in the environment referencing those credentials.
The Mesosphere internal AWS tool maws automatically stores account specific temporary AWS credentials in the default file system location and exports the corresponding profile into the environment. After logging in with maws, clusters can be launched using the AWS backend.
For CI deployments, long-lived credentials are preferred. It is recommended to use the environment variables method for AWS credentials in that case.
The environment variables are set as follows:
export AWS_ACCESS_KEY_ID=<aws_access_key_id>
export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
The EC2 instances launched by the AWS backend cost on the order of 24 US cents per instance, assuming the fixed cluster lifetime of two hours and m4.large EC2 instances.
ssh¶
The ssh command must be available.
The AWS backend has been tested on macOS and on Linux.
It is not expected that it will work out of the box with Windows, see issue QUALITY-1771.
If your operating system is not supported, it may be possible to use Vagrant, or another Linux virtual machine.
DC/OS Installation¶
Clusters created by the AWS backend only support installing DC/OS via install_dcos_from_url().
This is because the installation method employs a bootstrap node that directly downloads the build_artifact from the specified URL.
Nodes of Clusters created by the AWS backend distinguish between public_ip_address and private_ip_address.
The private_ip_address refers to the internal network of the AWS stack, which is also used by DC/OS internally.
The public_ip_address allows for reaching AWS EC2 instances from the outside, e.g. from the dcos-e2e testing environment.
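As an illustration, a short sketch of reading both addresses, assuming cluster is a Cluster created with the AWS backend:

for master in cluster.masters:
    # private_ip_address: internal AWS network, used by DC/OS itself.
    # public_ip_address: reachable from outside, e.g. the test environment.
    print(master.private_ip_address, master.public_ip_address)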
AWS Regions¶
When launching a cluster with Amazon Web Services there are a number of different regions to choose from; the region where the cluster is launched is set using aws_region.
It is recommended to use us-west-1 or us-west-2 to keep the cost low.
See AWS Regions and Availability Zones for available regions.
Restricting access to the cluster¶
The AWS backend takes a parameter admin_location.
This parameter restricts access to the AWS stack from the outside to a particular IP address range.
The default value '0.0.0.0/0' allows accessing the cluster from anywhere.
It is recommended to restrict the address range to a subnet including the public IP of the machine executing tests with the AWS backend, for example <external-ip>/24.
Accessing cluster nodes¶
SSH can be used to access cluster nodes for the purpose of debugging if workspace_dir is set.
The AWS backend generates an SSH key file id_rsa in a cluster-specific sub-directory under the workspace_dir directory. The sub-directory is named after the unique cluster ID generated during cluster creation. The cluster ID is prefixed with dcos-e2e- and can be found through the DC/OS UI in the upper left corner, or through the CCM UI when using maws with a Mesosphere AWS account.
Adding this key to the ssh-agent, or manually providing it via the -i flag after changing its file permissions to 400, will allow for connecting to the cluster via the ssh command, as sketched below.
The SSH user depends on the linux_distribution given to the AWS backend.
For CENTOS_7 that is centos.
It is important to keep in mind that files in the given workspace_dir are temporary and are removed when the cluster is destroyed.
If workspace_dir is unset, the AWS backend will create a new temporary directory in an operating system specific location.
Cluster lifetime¶
The cluster lifetime is fixed at two hours.
If the cluster was launched with maws (Mesosphere temporary AWS credentials) the cluster can be controlled via CCM. This allows for extending the cluster lifetime and also for cleaning up the cluster if anything goes wrong.
EC2 instance types¶
Currently the AWS backend launches m4.large instances for all DC/OS nodes.
Unsupported DC/OS versions¶
The AWS backend does not currently support DC/OS versions below 1.10. Adding support for DC/OS 1.9 is tracked in issue DCOS-21960.
Unsupported features¶
The AWS backend does not currently support the Cluster feature of copying files to the DC/OS installer by supplying files_to_copy_to_installer.
The progress on this feature is tracked in issue DCOS-21894.
Troubleshooting¶
In case of an error during the DC/OS installation, the journal from each node will be dumped and downloaded to the folder that the tests were executed in.
The logs are prefixed with the installation phase that failed: preflight, deploy or postflight.
When using temporary credentials, make sure the credentials are still valid, or renew them, before destroying a cluster. If the credentials are no longer valid, the AWS backend does not delete the public/private key pair generated during cluster creation. It is therefore recommended to periodically renew temporary AWS credentials when executing tests using the AWS backend.
In rare cases it might also happen that an AWS stack deployment fails with the message ROLLBACK_IN_PROGRESS.
In that case at least one of the EC2 instances failed to come up. Starting a new cluster is the only option then.
Reference¶
class dcos_e2e.backends.AWS(aws_region='us-west-2', admin_location='0.0.0.0/0', linux_distribution=<Distribution.CENTOS_7: 1>, workspace_dir=None)¶
Create a configuration for an AWS cluster backend.
Parameters:
- admin_location (str) – The IP address range from which the AWS nodes can be accessed.
- aws_region (str) – The AWS location to create nodes in. See Regions and Availability Zones.
- linux_distribution (Distribution) – The Linux distribution to boot DC/OS on.
- workspace_dir (Optional[Path]) – The directory in which large temporary files will be created. These files will be deleted at the end of a test run. This is equivalent to dir in tempfile.mkstemp().
Attributes:
- admin_location – The IP address range from which the AWS nodes can be accessed.
- aws_region – The AWS location to create nodes in. See Regions and Availability Zones.
- linux_distribution – The Linux distribution to boot DC/OS on.
- workspace_dir – The directory in which large temporary files will be created. These files will be deleted at the end of a test run.
Raises: NotImplementedError – In case an unsupported Linux distribution has been passed in at backend creation.
Custom Backends¶
DC/OS E2E supports pluggable backends. You may wish to create a new backend to support a new cloud provider for example.
How to Create a Custom Backend¶
To create a custom cluster backend, you need to create two classes.
You need to create a ClusterManager and a ClusterBackend.
A ClusterBackend may take custom parameters and is useful for storing backend-specific options.
A ClusterManager implements the nuts and bolts of cluster management for a particular backend.
This includes things like creating nodes and installing DC/OS on those nodes.
Please consider contributing your backend to this repository if it is stable and could be of value to a wider audience.
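A skeleton sketch of these two classes, following the base class reference below; VendorBackend and VendorClusterManager are hypothetical names, and a real backend may need to implement further members required by the base classes:

from typing import Type

from dcos_e2e.backends._base_classes import ClusterBackend, ClusterManager

class VendorBackend(ClusterBackend):
    """Hypothetical backend storing backend-specific options."""

    @property
    def cluster_cls(self) -> Type['VendorClusterManager']:
        # The ClusterManager class to use to create and manage a cluster.
        return VendorClusterManager

class VendorClusterManager(ClusterManager):
    """Nuts and bolts of cluster management for the hypothetical backend."""

    def __init__(self, masters, agents, public_agents,
                 files_to_copy_to_installer, cluster_backend):
        # Create the given numbers of master, agent and public agent nodes.
        pass

    def install_dcos_from_url(self, build_artifact, dcos_config, log_output_live):
        # Install DC/OS from a build artifact passed as a URL string.
        pass

    def install_dcos_from_path(self, build_artifact, dcos_config, log_output_live):
        # Install DC/OS from a build artifact passed as a file system Path.
        pass

    def destroy(self):
        # Destroy all nodes in the cluster.
        pass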
References¶
class dcos_e2e.backends._base_classes.ClusterBackend¶
Cluster backend base class.
cluster_cls¶
Return the ClusterManager class to use to create and manage a cluster.
Return type: Type[ClusterManager]
-
class dcos_e2e.backends._base_classes.ClusterManager(masters, agents, public_agents, files_to_copy_to_installer, cluster_backend)¶
Create a DC/OS cluster with the given cluster_backend.
Parameters:
- masters (int) – The number of master nodes to create.
- agents (int) – The number of agent nodes to create.
- public_agents (int) – The number of public agent nodes to create.
- files_to_copy_to_installer (List[Tuple[Path, Path]]) – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
- cluster_backend (ClusterBackend) – Details of the specific DC/OS Docker backend to use.

install_dcos_from_url(build_artifact, dcos_config, log_output_live)¶
Install DC/OS from a build artifact passed as a URL string.
Return type: None

install_dcos_from_path(build_artifact, dcos_config, log_output_live)¶
Install DC/OS from a build artifact passed as a file system Path.
Return type: None

destroy()¶
Destroy all nodes in the cluster.
Return type: None
Cluster Nodes¶
Clusters are made of Nodes.
The Node interface is backend agnostic.
Nodes are generally used to run commands.
Nodes are either manually constructed in order to create a Cluster with from_nodes(), or they are retrieved from an existing Cluster.
class dcos_e2e.node.Node(public_ip_address, private_ip_address, default_ssh_user, ssh_key_path)¶
Parameters:
- public_ip_address (IPv4Address) – The public IP address of the node.
- private_ip_address (IPv4Address) – The IP address used by the DC/OS component running on this node.
- default_ssh_user (str) – The default username to use for SSH connections.
- ssh_key_path (Path) – The path to an SSH key which can be used to SSH to the node as the default_ssh_user user.
Attributes:
- public_ip_address – The public IP address of the node.
- private_ip_address – The IP address used by the DC/OS component running on this node.
- default_ssh_user – The default username to use for SSH connections.
Running a Command on a Node¶
There are two methods used to run commands on Nodes.
run and popen are roughly equivalent to their subprocess namesakes.
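For instance, a small sketch of both methods, assuming node is a Node retrieved from a Cluster and that popen returns a subprocess.Popen-style pipe object as described below:

# run blocks until the command finishes, like subprocess.run.
result = node.run(args=['cat', '/etc/os-release'])
print(result.stdout)

# popen opens a pipe to the running process, like subprocess.Popen.
proc = node.popen(args=['cat', '/etc/os-release'])
stdout, stderr = proc.communicate()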
Node.run(args, user=None, log_output_live=False, env=None, shell=False, tty=False)¶
Run a command on this node as the given user.
Parameters:
- args (List[str]) – The command to run on the node.
- user (Optional[str]) – The username to SSH as. If None, then the default_ssh_user is used instead.
- log_output_live (bool) – If True, log output live. If True, stderr is merged into stdout in the return value.
- env (Optional[Dict[str, Any]]) – Environment variables to be set on the node before running the command. A mapping of environment variable names to values.
- shell (bool) – If False (the default), each argument is passed as a literal value to the command. If True, the command line is interpreted as a shell command, with a special meaning applied to some characters (e.g. $, &&, >). This means the caller must quote arguments if they may contain these special characters, including whitespace.
- tty (bool) – If True, allocate a pseudo-tty. This means that the user's terminal is attached to the streams of the process. This means that the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.
Returns: The representation of the finished process.
Raises: subprocess.CalledProcessError – The process exited with a non-zero code.
Node.popen(args, user=None, env=None, shell=False)¶
Open a pipe to a command run on a node as the given user.
Parameters:
- args (List[str]) – The command to run on the node.
- user (Optional[str]) – The user to open a pipe for a command for over SSH. If None, the default_ssh_user is used instead.
- env (Optional[Dict[str, Any]]) – Environment variables to be set on the node before running the command. A mapping of environment variable names to values.
- shell (bool) – If False (the default), each argument is passed as a literal value to the command. If True, the command line is interpreted as a shell command, with a special meaning applied to some characters (e.g. $, &&, >). This means the caller must quote arguments if they may contain these special characters, including whitespace.
Returns: The pipe object attached to the specified process.
Using DC/OS Enterprise¶
DC/OS Enterprise requires various configuration variables which are not allowed or required by open source DC/OS.
The following example shows how to use DC/OS Enterprise with DC/OS E2E.
from pathlib import Path

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster
from passlib.hash import sha512_crypt

ee_artifact = Path('/tmp/dcos_generate_config.ee.sh')
license_key_contents = Path('/tmp/license-key.txt').read_text()

superuser_username = 'my_username'
superuser_password = 'my_password'

extra_config = {
    'superuser_username': superuser_username,
    'superuser_password_hash': sha512_crypt.hash(superuser_password),
    'fault_domain_enabled': False,
    'license_key_contents': license_key_contents,
}

with Cluster(cluster_backend=Docker()) as cluster:
    cluster.install_dcos_from_path(
        build_artifact=ee_artifact,
        dcos_config={
            **cluster.base_config,
            **extra_config,
        },
    )
    cluster.wait_for_dcos_ee(
        superuser_username=superuser_username,
        superuser_password=superuser_password,
    )
    cluster.run_integration_tests(
        env={
            'DCOS_LOGIN_UNAME': superuser_username,
            'DCOS_LOGIN_PW': superuser_password,
        },
        pytest_command=['pytest', '-k', 'tls'],
    )
Linux Distributions¶
Some backends support multiple Linux distributions on nodes. Not all distributions are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
Docker Versions¶
Some backends support multiple Docker versions on nodes. Not all Docker versions are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
Docker Storage Drivers¶
Some backends support multiple Docker storage drivers on nodes. Not all storage drivers are necessarily fully supported by DC/OS. See particular backend configuration classes for options.
dcos-docker CLI¶
The dcos-docker CLI allows you to create, manage and destroy open source DC/OS and DC/OS Enterprise clusters on Docker nodes.
A typical CLI workflow for open source DC/OS may look like the following. Install the CLI, then create, manage and destroy a cluster:
# Fix issues shown by dcos-docker doctor
$ dcos-docker doctor
$ dcos-docker create /tmp/dcos_generate_config.sh --agents 0 --cluster-id default
default
# Without specifying a cluster ID for ``wait`` and ``run``, ``default``
# is automatically used.
$ dcos-docker wait
$ dcos-docker run --sync-dir /path/to/dcos/checkout pytest -k test_tls
...
$ dcos-docker destroy
Each of these and more are described in detail below.
Requirements¶
Docker¶
Docker version 17.06 or later must be installed.
Plenty of memory must be given to Docker. On Docker for Mac, this can be done from Docker > Preferences > Advanced. This backend has been tested with a four node cluster with 9 GB memory given to Docker.
IP Routing Set Up for Docker¶
On macOS, hosts cannot connect to containers' IP addresses by default. Once the CLI is installed, run dcos-docker setup-mac-network.
Operating System¶
This tool has been tested on macOS with Docker for Mac and on Linux.
It has also been tested on Windows on Vagrant.
Windows¶
The only supported way to use the Docker backend on Windows is using Vagrant and VirtualBox.
The setup steps are the same as the Windows instructions in the Docker Backend requirements above.
dcos-docker doctor¶
DC/OS E2E comes with the dcos-docker doctor command. Run this command to check your system for common causes of problems.
Installation¶
The CLI can be installed with Homebrew on macOS, and the library and CLI can be installed together with pip on Linux and macOS.
See “Operating System” requirements for instructions on using the CLI on Windows in a Vagrant VM.
The detailed Homebrew and pip installation steps are the same as in the Installation section above.
Creating a Cluster¶
To create a cluster you first need to download a DC/OS release.
DC/OS Enterprise is also supported. Ask your sales representative for release artifacts.
Creating a cluster is possible with the dcos-docker create
command.
This command allows you to customize the cluster in many ways.
See the dcos-docker create reference for details on this command and its options.
The command returns when the DC/OS installation process has started. To wait until DC/OS has finished installing, use the dcos-docker wait command.
To use this cluster, it is useful to find details using the dcos-docker inspect command.
DC/OS Enterprise¶
There are multiple DC/OS Enterprise-only features available in dcos-docker create.
The only extra requirement is to give a valid license key, for DC/OS 1.11+. See the dcos-docker create reference for details on how to provide a license key.
Ask your sales representative for DC/OS Enterprise release artifacts.
For example, run the following to create a DC/OS Enterprise cluster in strict mode:
$ dcos-docker create /path/to/dcos_generate_config.ee.sh \
--license-key /path/to/license.txt \
--security-mode strict \
--cluster-id default
The command returns when the DC/OS installation process has started. To wait until DC/OS has finished installing, use the dcos-docker wait command.
See the dcos-docker create reference for details on this command and its options.
“default” Cluster ID¶
It can become tedious to repeatedly type the cluster ID, particularly if you only have one cluster.
As a convenience, any command which takes a --cluster-id option, apart from create, defaults to using “default” if no cluster ID is given.
This means that you can create a cluster with --cluster-id=default and then, for example, use dcos-docker wait with no arguments to wait for the default cluster, as in the sketch below.
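An illustrative workflow relying on the “default” cluster ID; the installer path is a placeholder:

$ dcos-docker create /tmp/dcos_generate_config.sh --cluster-id default
default
$ dcos-docker wait
$ dcos-docker run systemctl list-units
$ dcos-docker destroy
default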
Running commands on Cluster Nodes¶
It is possible to run commands on a cluster node in multiple ways.
These include using dcos-docker run, docker exec and ssh.
Running commands on a cluster node using dcos-docker run¶
For example, run the following to run a command on an arbitrary master node:
$ dcos-docker run --cluster-id example systemctl list-units
See the dcos-docker run reference for more information on this command.
In particular, see the --node option to choose a particular node to run the command on.
Running commands on a cluster node using docker exec¶
Each cluster node is a Docker container.
This means that you can use tools such as docker exec to run commands on nodes.
To do this, first choose the container ID of a node.
Use dcos-docker inspect to see all node container IDs.
Alternatively, use the --env flag to output commands to be evaluated, as such:
$ eval $(dcos-docker inspect --cluster-id example --env)
$ docker exec -it $MASTER_0 systemctl list-units
Which environment variables are available depends on the size of your cluster.
Running commands on a cluster node using ssh¶
One SSH key allows access to all nodes in the cluster.
See this SSH key's path and the IP addresses of nodes using dcos-docker inspect.
The available SSH user is root, as in the sketch below.
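For example, a sketch of an SSH connection; the key path and IP address are placeholders to be read from dcos-docker inspect output:

$ ssh -i /path/to/ssh/key root@<node-ip>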
Getting on to a Cluster Node¶
Sometimes it is useful to get onto a cluster node. To do this, you can use any of the ways of Running commands on Cluster Nodes.
For example, to use dcos-docker run to run bash to get on to an arbitrary master node:
$ dcos-docker run --cluster-id example bash
or, similarly, to use docker exec to get on to a specific node:
$ eval $(dcos-docker inspect --cluster-id example --env)
$ docker exec -it $MASTER_0 bash
See Running commands on Cluster Nodes for details on how to choose particular nodes.
Destroying Clusters¶
There are two commands which can be used to destroy clusters. These are dcos-docker destroy and dcos-docker destroy-list.
Either destroy a cluster with dcos-docker destroy:
$ dcos-docker destroy
default
$ dcos-docker destroy pr_4033_strict
pr_4033_strict
or use dcos-docker destroy-list to destroy multiple clusters:
$ dcos-docker destroy-list pr_4033_strict pr_4019_permissive
pr_4033_strict
pr_4019_permissive
To destroy all clusters, run the following command:
$ dcos-docker destroy-list $(dcos-docker list)
pr_4033_strict
pr_4019_permissive
Viewing Debug Information¶
The CLI is quiet by default.
To see more information, use -v or -vv after dcos-docker.
Running Integration Tests¶
The dcos-docker run command is useful for running integration tests.
To run integration tests which are developed in a DC/OS checkout at /path/to/dcos, you can use the following workflow:
$ dcos-docker create /tmp/dcos_generate_config.ee.sh --cluster-id default
$ dcos-docker wait
$ dcos-docker run --sync-dir /path/to/dcos/checkout pytest -k test_tls.py
There are multiple options and shortcuts for using these commands. See the dcos-docker run reference for more information on this command.
Viewing the Web UI¶
To view the web UI of your cluster, use the dcos-docker web command. If you instead want to view the web UI URL of your cluster, use the dcos-docker inspect command.
Before viewing the UI, you may first need to configure your browser to trust your DC/OS CA, or choose to override the browser protection.
Using a Custom CA Certificate¶
On DC/OS Enterprise clusters, it is possible to use a custom CA certificate. See the Custom CA certificate documentation for details. It is possible to use dcos-docker create to create a cluster with a custom CA certificate.
- Create or obtain the necessary files: dcos-ca-certificate.crt, dcos-ca-certificate-key.key, and dcos-ca-certificate-chain.crt.
- Put the above-mentioned files into a directory, e.g. /path/to/genconf/.
- Create a file containing the “extra” configuration. dcos-docker create takes an --extra-config option. This adds the contents of the specified YAML file to a minimal DC/OS configuration. Create a file with the following contents:

ca_certificate_path: genconf/dcos-ca-certificate.crt
ca_certificate_key_path: genconf/dcos-ca-certificate-key.key
ca_certificate_chain_path: genconf/dcos-ca-certificate-chain.crt
- Create a cluster:

dcos-docker create \
  /path/to/dcos_generate_config.ee.sh \
  --genconf-dir /path/to/genconf/ \
  --copy-to-master /path/to/genconf/dcos-ca-certificate-key.key:/var/lib/dcos/pki/tls/CA/private/custom_ca.key \
  --license-key /path/to/license.txt \
  --extra-config config.yml \
  --cluster-id default
- Verify that everything has worked. See Verify installation for steps to verify that the DC/OS Enterprise cluster was installed properly with the custom CA certificate.
Limitations¶
Docker does not represent a real DC/OS environment with complete accuracy. See the Limitations described in the Docker Backend section above for the currently known differences.
CLI Reference¶
dcos-docker¶
Manage DC/OS clusters on Docker.
dcos-docker [OPTIONS] COMMAND [ARGS]...
Options
--version
Show the version and exit.
-v, --verbose
Commands
- create: Create a DC/OS cluster.
- destroy: Destroy a cluster.
- destroy-list: Destroy clusters.
- destroy-mac-network: Destroy containers created by “dcos-docker setup-mac-network”.
- doctor: Diagnose common issues which stop DC/OS E2E from working correctly.
- inspect: Show cluster details.
- list: List all clusters.
- run: Run an arbitrary command on a node.
- setup-mac-network: Set up a network to connect to nodes on macOS.
- sync: Sync files from a DC/OS checkout to master nodes.
- wait: Wait for DC/OS to start.
- web: Open the browser at the web UI.
dcos-docker create¶
Create a DC/OS cluster.
DC/OS Enterprise
DC/OS Enterprise clusters require different configuration variables from DC/OS OSS. For example, enterprise clusters require the following configuration parameters:
superuser_username, superuser_password_hash, fault_domain_enabled, license_key_contents
These can all be set in --extra-config. However, some defaults are provided for all but the license key.
The default superuser username is admin. The default superuser password is admin. The default fault_domain_enabled is false.
license_key_contents must be set for DC/OS Enterprise 1.11 and above. This is set to one of the following, in order:
* The license_key_contents set in --extra-config.
* The contents of the path given with --license-key.
* The contents of the path set in the DCOS_LICENSE_KEY_PATH environment variable.
If none of these are set, license_key_contents is not given.
dcos-docker create [OPTIONS] ARTIFACT
Options
--docker-version <docker_version>
The Docker version to install on the nodes. [default: 1.13.1]
--linux-distribution <linux_distribution>
The Linux distribution to use on the nodes. [default: centos-7]
--docker-storage-driver <docker_storage_driver>
The storage driver to use for Docker in Docker. By default this uses the host's driver.
--masters <masters>
The number of master nodes. [default: 1]
--agents <agents>
The number of agent nodes. [default: 1]
--public-agents <public_agents>
The number of public agent nodes. [default: 1]
--extra-config <extra_config>
The path to a file including DC/OS configuration YAML. The contents of this file will be added to a default configuration.
--security-mode <security_mode>
The security mode to use for a DC/OS Enterprise cluster. This overrides any security mode set in --extra-config.
-c, --cluster-id <cluster_id>
A unique identifier for the cluster. Defaults to a random value. Use the value “default” to use this cluster for other commands without specifying a cluster ID.
--license-key <license_key>
This is ignored if using open source DC/OS. If using DC/OS Enterprise, this defaults to the value of the DCOS_LICENSE_KEY_PATH environment variable.
--genconf-dir <genconf_dir>
Path to a directory that contains additional files for the DC/OS installer. All files from this directory will be copied to the genconf directory before running the DC/OS installer.
--copy-to-master <copy_to_master>
Files to copy to master nodes before installing DC/OS. This option can be given multiple times. Each option should be in the format /absolute/local/path:/remote/path.
--workspace-dir <workspace_dir>
Creating a cluster can use approximately 2 GB of temporary storage. Set this option to use a custom “workspace” for this temporary storage. See https://docs.python.org/3/library/tempfile.html#tempfile.gettempdir for details on the temporary directory location if this option is not set.
--custom-volume <custom_volume>
Bind mount a volume on all cluster node containers. See https://docs.docker.com/engine/reference/run/#volume-shared-filesystems for the syntax to use.
--custom-master-volume <custom_master_volume>
Bind mount a volume on all cluster master node containers. See https://docs.docker.com/engine/reference/run/#volume-shared-filesystems for the syntax to use.
--custom-agent-volume <custom_agent_volume>
Bind mount a volume on all cluster agent node containers. See https://docs.docker.com/engine/reference/run/#volume-shared-filesystems for the syntax to use.
--custom-public-agent-volume <custom_public_agent_volume>
Bind mount a volume on all cluster public agent node containers. See https://docs.docker.com/engine/reference/run/#volume-shared-filesystems for the syntax to use.
--variant <variant>
Choose the DC/OS variant. If the variant does not match the variant of the given artifact, an error will occur. Using “auto” finds the variant from the artifact. Finding the variant from the artifact takes some time and so using another option is a performance optimization.
Arguments
ARTIFACT
Required argument
Environment variables
DCOS_LICENSE_KEY_PATH
Provide a default for --license-key
dcos-docker wait¶
Wait for DC/OS to start.
dcos-docker wait [OPTIONS]
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
--superuser-username <superuser_username>
The superuser username is needed only on DC/OS Enterprise clusters. By default, on a DC/OS Enterprise cluster, admin is used.
--superuser-password <superuser_password>
The superuser password is needed only on DC/OS Enterprise clusters. By default, on a DC/OS Enterprise cluster, admin is used.
dcos-docker run¶
Run an arbitrary command on a node.
This command sets up the environment so that pytest can be run.
For example, run dcos-docker run --cluster-id 1231599 pytest -k test_tls.py.
Or, with sync: dcos-docker run --sync-dir . --cluster-id 1231599 pytest -k test_tls.py.
To use special characters such as single quotes in your command, wrap the whole command in double quotes, as sketched below.
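An illustrative sketch of such quoting; the test selector is a placeholder:

$ dcos-docker run "pytest -k 'test_tls'"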
dcos-docker run [OPTIONS] NODE_ARGS...
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
--dcos-login-uname <dcos_login_uname>
The username to set the DCOS_LOGIN_UNAME environment variable to.
--dcos-login-pw <dcos_login_pw>
The password to set the DCOS_LOGIN_PW environment variable to.
--sync-dir <sync_dir>
The path to a DC/OS checkout. Part of this checkout will be synced before the command is run.
--no-test-env
With this flag set, no environment variables are set and the command is run in the home directory.
--node <node>
A reference to a particular node to run the command on. This can be one of: the node's IP address, the node's Docker container name, the node's Docker container ID, or a reference in the format “<role>_<number>”. These details can be seen with dcos-docker inspect.
--env <env>
Set environment variables in the format “<KEY>=<VALUE>”.
Arguments
NODE_ARGS
Required argument(s)
dcos-docker inspect¶
Show cluster details.
To quickly get environment variables to use with Docker tooling, use the --env flag.
Run eval $(dcos-docker inspect <CLUSTER_ID> --env), then run docker exec -it $MASTER_0 bash to enter the first master, for example.
dcos-docker inspect [OPTIONS]
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
--env
Show details in an environment variable format to eval.
dcos-docker sync¶
Sync files from a DC/OS checkout to master nodes.
This syncs integration test files and bootstrap files.
DCOS_CHECKOUT_DIR should be set to the path of a clone of an open source DC/OS or DC/OS Enterprise repository.
By default the DCOS_CHECKOUT_DIR argument is set to the value of the DCOS_CHECKOUT_DIR environment variable.
If no DCOS_CHECKOUT_DIR is given, the current working directory is used.
dcos-docker sync [OPTIONS] [DCOS_CHECKOUT_DIR]
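For example, a sketch of syncing a checkout to the default cluster; the path is a placeholder:

$ dcos-docker sync /path/to/dcos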
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
Arguments
DCOS_CHECKOUT_DIR
Optional argument
Environment variables
DCOS_CHECKOUT_DIR
Provide a default for DCOS_CHECKOUT_DIR
dcos-docker destroy¶
Destroy a cluster.
dcos-docker destroy [OPTIONS]
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
dcos-docker destroy-list¶
Destroy clusters.
To destroy all clusters, run dcos-docker destroy-list $(dcos-docker list).
dcos-docker destroy-list [OPTIONS] [CLUSTER_IDS]...
Arguments
CLUSTER_IDS
Optional argument(s)
dcos-docker doctor¶
Diagnose common issues which stop DC/OS E2E from working correctly.
dcos-docker doctor [OPTIONS]
dcos-docker web¶
Open the browser at the web UI.
Note that the web UI may not be available at first.
Consider using dcos-docker wait before running this command.
dcos-docker web [OPTIONS]
Options
-c, --cluster-id <cluster_id>
If not given, “default” is used.
dcos-docker setup-mac-network¶
Set up a network to connect to nodes on macOS.
This creates an OpenVPN configuration file and describes how to use it.
dcos-docker setup-mac-network [OPTIONS]
Options
--force
Overwrite any files and destroy conflicting containers from previous uses of this command.
--configuration-dst <configuration_dst>
The location to create an OpenVPN configuration file. [default: ~/Documents/docker-for-mac.ovpn]
dcos-docker destroy-mac-network¶
Destroy containers created by “dcos-docker setup-mac-network”.
dcos-docker destroy-mac-network [OPTIONS]
Versioning, Support and API Stability¶
DC/OS E2E aims to work with the DC/OS OSS and DC/OS Enterprise master branches.
These are moving targets.
For this reason, CalVer is the main versioning scheme used: a version is a date at which the repository was last known to work with DC/OS OSS and DC/OS Enterprise.
As well as master, DC/OS E2E supports the following versions of DC/OS:
- DC/OS 1.11
- DC/OS 1.10
- DC/OS 1.9 (limited support, see DC/OS 1.9 and below)
Other versions may work but are not tested.
See GitHub for releases.
There is no guarantee of API stability at this point. All backwards incompatible changes will be documented in the Changelog.
DC/OS 1.9 and below¶
Installers for DC/OS 1.9 and below require a version of sed that is not compatible with the BSD sed that ships with macOS.
dcos-docker doctor includes a check for compatible sed versions.
Some Backends support installing DC/OS from a local path (install_dcos_from_path).
Some Backends support installing DC/OS from a URL (install_dcos_from_url).
To use these versions of DC/OS with macOS and install_dcos_from_path, we can either modify the installer or modify the local version of sed.
Modify the installer¶
The following command replaces an installer named dcos_generate_config.sh with a slightly different installer that works with the default sed on macOS.
sed \
-e 'H;1h;$!d;x' \
-e "s/sed '0,/sed '1,/" \
dcos_generate_config.sh > dcos_generate_config.sh.bak
mv dcos_generate_config.sh.bak dcos_generate_config.sh
Contributing¶
Contributions to this repository must pass tests and linting.
Install Contribution Dependencies¶
On Ubuntu, install system requirements:
apt install -y gcc python3-dev
Install dependencies in a virtual environment.
pip3 install --editable .[dev]
Optionally install the following tools for linting and interacting with Travis CI:
gem install travis --no-rdoc --no-ri
Spell checking requires enchant.
This can be installed on macOS, for example, with Homebrew:

brew install enchant

and on Ubuntu with apt:

apt install -y enchant
Linting Bash requires shellcheck. This can be installed on macOS, for example, with Homebrew:

brew install shellcheck

and on Ubuntu with apt:

apt-get install -y shellcheck
Linting¶
Install Contribution Dependencies.
Run lint tools:
make lint
These can be run in parallel with:
make lint --jobs --output-sync=target
To fix some lint errors, run the following:
make fix-lint
Tests for this package¶
Some tests require the Docker backend and some tests require the AWS backend. See the Docker backend documentation for details of what is needed for the Docker backend. See the AWS backend documentation for details of what is needed for the AWS backend.
Download dependencies which are used by the tests:
make download-artifacts
or, to additionally download a DC/OS Enterprise artifact, run the following:
make EE_ARTIFACT_URL=<http://...> download-artifacts
The DC/OS Enterprise artifact is required for some tests.
A license key is required for some tests:
cp /path/to/license-key.txt /tmp/license-key.txt
Run pytest:
pytest
To run the tests concurrently, use pytest-xdist. For example:
pytest -n 2
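pytest’s standard selection options also apply; for example, to run only tests whose names match a keyword (the keyword is illustrative) on two workers:
pytest -n 2 -k docker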
Documentation¶
Run the following commands to build and open the documentation:
make docs
make open-docs
CI¶
Linting and some tests are run on Travis CI.
See .travis.yml for details on the limitations.
To check whether a new change works on CI, it is unfortunately necessary to change .travis.yml to run the desired tests.
Most of the CLI functionality is not covered by automated tests. Changes should take this into consideration.
Rotating license keys¶
DC/OS Enterprise requires a license key. Mesosphere uses license keys internally for testing, and these expire regularly. A license key is encrypted and used by the Travis CI tests.
To update this key, use the following command after setting the LICENSE_KEY_CONTENTS environment variable.
This command will affect all builds, not just the current branch.
We do not use encrypted secret files in case the contents are shown in the logs.
We do not add an encrypted environment variable to .travis.yml because the license is too large.
travis env set --repo mesosphere/dcos-e2e LICENSE_KEY_CONTENTS $LICENSE_KEY_CONTENTS
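One way to set the variable beforehand is to read the license key file into it, for example (path illustrative):
export LICENSE_KEY_CONTENTS="$(cat /path/to/license-key.txt)"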
Updating the DC/OS Enterprise build artifact links¶
Private links to DC/OS Enterprise artifacts are used by Travis CI.
To update these links use the following commands, after setting the following environment variables:
EE_MASTER_ARTIFACT_URL
EE_1_9_ARTIFACT_URL
EE_1_10_ARTIFACT_URL
EE_1_11_ARTIFACT_URL
travis env set --repo mesosphere/dcos-e2e EE_MASTER_ARTIFACT_URL $EE_MASTER_ARTIFACT_URL
travis env set --repo mesosphere/dcos-e2e EE_1_9_ARTIFACT_URL $EE_1_9_ARTIFACT_URL
travis env set --repo mesosphere/dcos-e2e EE_1_10_ARTIFACT_URL $EE_1_10_ARTIFACT_URL
travis env set --repo mesosphere/dcos-e2e EE_1_11_ARTIFACT_URL $EE_1_11_ARTIFACT_URL
Updating the Amazon Web Services credentials¶
Private credentials for Amazon Web Services are used by Travis CI.
To update the credentials use the following commands, after setting the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
travis env set --repo mesosphere/dcos-e2e AWS_ACCESS_KEY_ID $AWS_ACCESS_KEY_ID
travis env set --repo mesosphere/dcos-e2e AWS_SECRET_ACCESS_KEY $AWS_SECRET_ACCESS_KEY
Currently, credentials are taken from the OneLogin Secure Notes note dcos-e2e integration testing AWS credentials.
Parallel builders¶
Travis CI has a maximum test run time of 50 minutes. To stay under this limit, and to see failures faster, we run multiple builds per commit, with close to one builder per test. Tests which run quickly are grouped together.
Goals¶
Avoid flakiness¶
For timeouts, err on the side of a much longer timeout than necessary.
Do not access the web while running tests.
Parallelizable Tests¶
The tests in this repository and using this harness are slow. This harness must not get in the way of parallelization efforts.
Logging¶
End to end tests are notoriously difficult to get meaning from. To help with this, an “excessive logging” policy is used here.
Robustness¶
Narrowing down bugs from end to end tests is hard enough without dealing with the framework’s own bugs. This repository aims to maintain high standards of code quality, and quality enforcement by CI is part of that.
Version Policy¶
This repository aims to work with the DC/OS OSS and DC/OS Enterprise master branches.
These are moving targets.
For this reason, CalVer is the main versioning scheme: each version is a date at which the repository was last known to work with DC/OS OSS and DC/OS Enterprise.
Updating DC/OS Test Utils and DC/OS Launch¶
DC/OS Test Utils and DC/OS Launch are vendored in this repository. To update DC/OS Test Utils or DC/OS Launch:
Update the SHAs in admin/update_vendored_packages.py.
The following creates a commit with changes to the vendored packages:
admin/update_vendored_packages.sh
Testing the Homebrew Recipe¶
Install Homebrew or Linuxbrew.
brew install dcosdocker.rb
brew audit dcosdocker
brew test dcosdocker
Changelog¶
2018.05.24.2¶
- Add --env option to dcos-docker run.
2018.05.24.1¶
- Make xfs_info available on nodes, meaning that preflight checks can be run on nodes with XFS.
- Fix dcos-docker doctor for cases where df produces very long results.
2018.05.21.0¶
- Show a formatted error rather than a traceback if Docker cannot be connected to.
- Custom backends must now implement a base_config method.
- Custom backends’ installation methods must now take dcos_config rather than extra_config.
- Cluster.install_dcos_from_url and Cluster.install_dcos_from_path now take dcos_config rather than extra_config.
2018.05.17.0¶
- Add a --variant option to dcos-docker create to speed up cluster creation.
2018.05.15.0¶
- Add a test_host parameter to Cluster.run_integration_tests.
- Add the ability to specify a node to use for dcos-docker run.
2018.05.14.0¶
- Show IP address in dcos-docker inspect.
2018.05.10.0¶
- Expose the SSH key location in dcos-docker inspect.
- The network created by setup-mac-network now survives restarts.
2018.05.02.0¶
- Previously not all volumes were destroyed when destroying a cluster from the CLI or with the Docker backend. This has been resolved. To remove dangling volumes from previous versions, use docker volume prune.
- Backwards incompatible change: mount parameters to Docker.__init__ now take a list of docker.types.Mounts.
- Docker version 17.06 or later is now required for the CLI and for the Docker backend.
2018.04.30.2¶
- Added dcos-docker destroy-mac-network command.
- Added a --force parameter to dcos-docker setup-mac-network to override files and containers.
2018.04.29.0¶
- Added dcos-docker setup-mac-network command.
2018.04.25.0¶
- Logs from dependencies are no longer emitted.
- The dcos-docker CLI now gives more feedback to let you know that things are happening.
2018.04.19.0¶
- The AWS backend now supports DC/OS 1.9.
- The Docker backend now supports having custom mounts which apply to all nodes.
- Add custom-volume parameter (and similar for each node type) to dcos-docker create.
2018.04.11.0¶
- Add an AWS backend to the library.
- Add ability to control which labels are added to particular node types on the Docker backend.
- Add support for Ubuntu on the Docker backend.
2018.04.02.1¶
- Add a new dcos-docker doctor check for suitable sed for DC/OS 1.9.
- Support cluster.run_integration_tests on DC/OS 1.9.
2018.04.02.0¶
- Add support for DC/OS 1.9 on Linux hosts.
- dcos-docker doctor returns a status code of 1 if there are any errors.
- Add a new dcos-docker doctor check for free space in the Docker root directory.
2018.03.26.0¶
- Add a dcos-docker doctor check that a supported storage driver is available.
- Fix error with using Docker version v17.12.1-ce inside Docker nodes.
- Fix race condition between installing DC/OS and SSH starting.
- Remove support for Ubuntu on the Docker backend.
2018.03.07.0¶
- Fix public agents on DC/OS 1.10.
- Remove options to use Fedora and Debian in the Docker backend nodes.
- Fix the Ubuntu distribution on the Docker backend.
- Add support for Docker 17.12.1-ce on nodes in the Docker backend.
- Exceptions in create in the CLI point towards the doctor command.
- Removed a race condition in the doctor command.
- dcos-docker run now exits with the return code of the command run.
- dcos-docker destroy-list is a new command and dcos-docker destroy now adheres to the common semantics of the CLI.
2018.02.28.0¶
- Add Vagrantfile to run DC/OS E2E in a virtual machine.
- Add instructions for running DC/OS E2E on Windows.
- Allow relative paths for the build artifact.
2018.02.27.0¶
- Backwards incompatible change: Move default_ssh_user parameter from Cluster to Node. The default_ssh_user is now used for Node.run, Node.popen and Node.send_file if user is not supplied.
2018.02.23.0¶
- Add linux_distribution parameter to the Docker backend.
- Add support for CoreOS in the Docker backend.
- Add docker_version parameter to the Docker backend.
- The fallback Docker storage driver for the Docker backend is now aufs.
- Add storage_driver parameter to the Docker backend.
- Add docker_container_labels parameter to the Docker backend.
- Logs are now less cluttered with escape characters.
- Documentation is now on Read The Docs.
- Add a Command Line Interface.
- Vendor dcos_test_utils so --process-dependency-links is not needed.
- Backwards incompatible change: Cluster’s files_to_copy_to_installer argument is now a List of Tuples rather than a Dict.
- Add a tty option to Node.run and Cluster.run_integration_tests.
2018.01.25.0¶
- Backwards incompatible change: Change the default behavior of Node.run and Node.popen to quote arguments, unless a new shell parameter is True. These methods now behave similarly to subprocess.run.
- Add custom string representation for Node object.
- Bump dcos-test-utils for better diagnostics reports.
2018.01.22.0¶
- Expose the public_ip_address of the SSH connection and the private_ip_address of its DC/OS component on Node objects.
- Bump dcos-test-utils for better diagnostics reports.
2017.12.11.0¶
- Replace the extended wait_for_dcos_ee timeout with a preceding dcos-diagnostics check.
2017.12.08.0¶
- Extend wait_for_dcos_ee timeout for waiting until the DC/OS CA cert can be fetched.
2017.11.29.0¶
- Backwards incompatible change: Introduce separate wait_for_dcos_oss and wait_for_dcos_ee methods. Both methods improve the boot process waiting time for the corresponding DC/OS version.
- Backwards incompatible change: run_integration_tests now requires users to call wait_for_dcos_oss or wait_for_dcos_ee beforehand.
2017.11.21.0¶
- Remove the ExistingCluster backend and replace it with the simpler Cluster.from_nodes method.
- Simplify the default configuration for the Docker backend. Notably this no longer contains a default superuser_username or superuser_password_hash.
- Support custom_agent_mounts and custom_public_agent_mounts on the Docker backend.
2017.11.15.0¶
- Remove destroy_on_error and destroy_on_success from Cluster. Instead, avoid using Cluster as a context manager to keep the cluster alive.
2017.11.14.0¶
- Backwards incompatible change: Rename DCOS_Docker backend to Docker backend.
- Backwards incompatible change: Replace generate_config_path with build_artifact, which can either be a Path or an HTTP(S) URL string. This allows for supporting installation methods that require build artifacts to be downloaded from an HTTP server.
- Backwards incompatible change: Remove run_as_root. Instead require a default_ssh_user for backends to run commands over SSH on any cluster Node created with this backend.
- Backwards incompatible change: Split the DC/OS installation from the ClusterManager __init__ procedure. This allows for installing DC/OS after Cluster creation, and therefore enables decoupling of transferring files ahead of the installation process.
- Backwards incompatible change: Explicitly distinguish installation methods by providing separate install_dcos_from_path and install_dcos_from_url methods instead of inspecting the type of build_artifact.
- Backwards incompatible change: log_output_live is no longer an attribute of the Cluster class. It may now be passed separately as a parameter for each output-generating operation.
2017.11.02.0¶
- Added Node.send_file to allow files to be copied to nodes.
- Added custom_master_mounts to the DC/OS Docker backend.
- Backwards incompatible change: Removed files_to_copy_to_masters. Instead, use custom_master_mounts or Node.send_file.
2017.10.04.0¶
- Added Apache2 license.
- Repository moved to https://github.com/dcos/dcos-e2e.
- Added run, which is similar to run_as_root but takes a user argument.
- Added popen, which can be used for running commands asynchronously.
2017.08.11.0¶
- Fix bug where Node reprs were put into environment variables rather than IP addresses. This prevented some integration tests from working.
2017.08.08.0¶
- Fixed issue which prevented files_to_copy_to_installer from working.
2017.08.05.0¶
- The Enterprise DC/OS integration tests now require environment variables describing the IP addresses of the cluster. DC/OS E2E now passes these environment variables.
2017.06.23.0¶
- Wait for 5 minutes after diagnostics check.
2017.06.22.0¶
- Account for the name of 3dt having changed to dcos-diagnostics.
2017.06.21.1¶
- Support platforms where $HOME is set as /root.
- Cluster.wait_for_dcos now waits for CA cert to be available.
2017.06.21.0¶
- Add ability to specify a workspace.
- Fixed issue with DC/OS Docker files not existing in the repository.
2017.06.20.0¶
- Vendor DC/OS Docker so a path is not needed.
- If log_output_live is set to True for a Cluster, logs are shown in wait_for_dcos.
2017.06.19.0¶
- More storage efficient.
- Removed need to tell Cluster whether a cluster is an enterprise cluster.
- Removed need to tell Cluster the superuser_password.
. - Added ability to set environment variables on remote nodes when running commands.
2017.06.15.0¶
- Initial release.
Release Process¶
Outcomes¶
- A new git tag available to install.
- An updated Homebrew recipe.
- The new version title in the changelog.
Prerequisites¶
- python3 on your PATH set to Python 3.5+.
- virtualenv.
- Push access to this repository.
- Trust that master is ready and high enough quality for release. This includes the Next section in CHANGELOG.rst being up to date.
Perform a Release¶
Get a GitHub access token:
Follow the GitHub instructions for getting an access token.
Set the GITHUB_TOKEN environment variable to the GitHub access token, e.g.:
export GITHUB_TOKEN=75c72ad718d9c346c13d30ce762f121647b502414
Perform a release:
curl https://raw.githubusercontent.com/dcos/dcos-e2e/master/admin/release.sh | bash