Arm64 Docker support (#1770)

* Rework docker image to add arm64 support

The mythril/myth image now builds for linux/arm64 as well as
linux/amd64. To achieve this, we now use docker buildx to build the
images and create a multi-platform image manifest. The build
config is defined in a buildx bake file.

By default it'll build both platforms at once, but you can build just
one by overriding the platform on the command line:

    $ docker buildx bake --set='*.platform=linux/arm64'

The solcx Python package doesn't support downloading solc for arm64, so
the image now includes the svm command-line tool, which does. (svm is
used by foundry to provide solc versions.) Integration with solcx is
not automatic, so currently the image's docker-entrypoint.sh handles
symlinking solc versions from svm into solcx's directory.
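The symlinking idea can be sketched as follows, using temp dirs in place of the real homedir paths (the assumed layout: svm keeps binaries under ~/.svm/&lt;version&gt;/solc-&lt;version&gt;, while solcx looks for ~/.solcx/solc-v&lt;version&gt;):

```shell
#!/usr/bin/env bash
# Sketch of the svm -> solcx symlinking step, with temp dirs standing in for
# ~/.svm and ~/.solcx (directory layout is an assumption, see above).
set -euo pipefail
svm_dir=$(mktemp -d)
solcx_dir=$(mktemp -d)

# Pretend svm has installed solc 0.8.19
mkdir -p "$svm_dir/0.8.19"
touch "$svm_dir/0.8.19/solc-0.8.19"

# Link each svm binary into the solcx dir under the name solcx expects
find "$svm_dir" -type f -name 'solc-*' | while read -r svm_solc; do
    name=$(basename "$svm_solc")
    version=${name#solc-}                 # strip the solc- prefix
    ln -sf "$svm_solc" "$solcx_dir/solc-v$version"
done

ls "$solcx_dir"   # → solc-v0.8.19
```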

In addition to supporting arm64, the image is now quite a bit smaller:
~400 MB vs 1.3 GB before.

* Update docker image build script for new image

* Remove the z3-solver pip platform hack

When installing wheels in the Docker image, we previously used an ugly
hack to force pip to install the z3-solver wheel, despite it having
invalid platform metadata. Instead of bodging pip install in this way,
we now fix the z3-solver wheel's metadata after building it, using
`auditwheel addtag` to infer and apply compatible platform metadata,
which allows pip to install the wheel normally.
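The retag only touches the wheel filename's platform-tag segment. A sketch of the transformation (the z3-solver version and the inferred manylinux_2_17 tag here are assumptions; auditwheel picks the real tag by inspecting the binary):

```shell
#!/usr/bin/env bash
# Illustration only: how adding a platform tag changes a wheel filename.
# Version number and inferred tag are hypothetical.
set -euo pipefail
wheel="z3_solver-4.12.1.0-py2.py3-none-manylinux1_aarch64.whl"
base="${wheel%-*}"   # distribution-version-python-abi (strip platform tag)
# Tags in a wheel filename's compressed tag set are joined with '.'
retagged="${base}-manylinux1_aarch64.manylinux_2_17_aarch64.whl"
echo "$retagged"
```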
Hal Blackburn 2 years ago committed by GitHub
parent d531d8ba10
commit 7dcefb5b8a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
  1. .dockerignore (4 changed lines)
  2. Dockerfile (184 changed lines)
  3. docker-bake.hcl (52 changed lines)
  4. docker/docker-entrypoint.sh (19 changed lines)
  5. docker/sync-svm-solc-versions-with-solcx.sh (17 changed lines)
  6. docker_build_and_deploy.sh (26 changed lines)

@@ -0,0 +1,4 @@
/.*
/build
/docker-bake.hcl
/Dockerfile

@@ -1,51 +1,143 @@
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
# Space-separated version string without leading 'v' (e.g. "0.4.21 0.4.22")
ARG SOLC
RUN apt-get update \
&& apt-get install -y \
libsqlite3-0 \
libsqlite3-dev \
&& apt-get install -y \
apt-utils \
build-essential \
locales \
python-pip-whl \
python3-pip \
python3-setuptools \
software-properties-common \
&& add-apt-repository -y ppa:ethereum/ethereum \
&& apt-get update \
&& apt-get install -y \
solc \
libssl-dev \
python3-dev \
pandoc \
git \
wget \
&& ln -s /usr/bin/python3 /usr/local/bin/python
COPY ./requirements.txt /opt/mythril/requirements.txt
RUN cd /opt/mythril \
&& pip3 install -r requirements.txt
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.en
ENV LC_ALL en_US.UTF-8
COPY . /opt/mythril
RUN cd /opt/mythril \
&& python setup.py install
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.10
ARG INSTALLED_SOLC_VERSIONS
FROM python:${PYTHON_VERSION:?} AS python-wheel
WORKDIR /wheels
FROM python-wheel AS python-wheel-with-cargo
# Enable cargo sparse-registry to prevent it using large amounts of memory in
# docker builds, and speed up builds by downloading less.
# https://github.com/rust-lang/cargo/issues/10781#issuecomment-1163819998
ENV CARGO_UNSTABLE_SPARSE_REGISTRY=true
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH=/root/.cargo/bin:$PATH
# z3-solver needs to build from src on arm, and it takes a long time, so
# building it in a separate stage helps parallelise the build and helps it stay
# in the build cache.
FROM python-wheel AS python-wheel-z3-solver
RUN pip install auditwheel
RUN --mount=source=requirements.txt,target=/run/requirements.txt \
pip wheel "$(grep z3-solver /run/requirements.txt)"
# The wheel z3-solver builds does not install on arm64 because it generates
# incorrect platform compatibility metadata for arm64 builds. (It uses the
# platform manylinux1_aarch64 but manylinux1 is only defined for x86 systems,
# not arm: https://peps.python.org/pep-0600/#legacy-manylinux-tags). To work
# around this, we use pypa's auditwheel tool to infer and apply a compatible
# platform tag.
RUN ( auditwheel addtag ./z3_solver-* \
# replace incorrect wheel with the re-tagged one
&& rm ./z3_solver-* && mv wheelhouse/z3_solver-* . ) \
# addtag exits with status 1 if no tags need adding, which is fine
|| true
FROM python-wheel-with-cargo AS python-wheel-blake2b
# blake2b-py doesn't publish ARM builds, and also doesn't publish source
# packages on PyPI (other than the old 0.1.3 version), so we need to build from
# a git
# tag. They do publish binaries for linux amd64, but their binaries only support
# certain platform versions and the amd64 python image isn't supported, so we
# have to build from src for that as well.
# Try to get a binary build or a source release on PyPI first, then fall back
# to building from the git repo.
RUN pip wheel 'blake2b-py>=0.2.0,<1' \
|| pip wheel git+https://github.com/ethereum/blake2b-py.git@v0.2.0
FROM python-wheel AS mythril-wheels
# cython is needed to build some wheels, such as cytoolz
RUN pip install cython
RUN --mount=source=requirements.txt,target=/run/requirements.txt \
# ignore blake2b and z3-solver as we've already built them
grep -v -e blake2b -e z3-solver /run/requirements.txt > /tmp/requirements-remaining.txt
RUN pip wheel -r /tmp/requirements-remaining.txt
COPY . /mythril
RUN pip wheel --no-deps /mythril
COPY --from=python-wheel-blake2b /wheels/blake2b* /wheels
COPY --from=python-wheel-z3-solver /wheels/z3_solver* /wheels
# Solidity Compiler Version Manager. This provides cross-platform solc builds.
# It's used by foundry to provide solc. https://github.com/roynalnaruto/svm-rs
FROM python-wheel-with-cargo AS solidity-compiler-version-manager
RUN cargo install svm-rs
# put the binaries somewhere obvious for later stages to use
RUN mkdir -p /svm-rs/bin && cd ~/.cargo/bin/ && cp svm solc /svm-rs/bin/
FROM python:${PYTHON_VERSION:?}-slim AS myth
ARG PYTHON_VERSION
# Space-separated version string without leading 'v' (e.g. "0.4.21 0.4.22")
ARG INSTALLED_SOLC_VERSIONS
COPY --from=solidity-compiler-version-manager /svm-rs/bin/* /usr/local/bin/
RUN --mount=from=mythril-wheels,source=/wheels,target=/wheels \
export PYTHONDONTWRITEBYTECODE=1 && pip install /wheels/*.whl
RUN adduser --disabled-password mythril
USER mythril
WORKDIR /home/mythril
RUN ( [ ! -z "${SOLC}" ] && set -e && for ver in $SOLC; do python -m solc.install v${ver}; done ) || true
# pre-install solc versions
RUN set -x; [ -z "${INSTALLED_SOLC_VERSIONS}" ] || svm install ${INSTALLED_SOLC_VERSIONS}
COPY --chown=mythril:mythril \
./mythril/support/assets/signatures.db \
/home/mythril/.mythril/signatures.db
COPY --chown=root:root --chmod=755 ./docker/docker-entrypoint.sh /
COPY --chown=root:root --chmod=755 \
./docker/sync-svm-solc-versions-with-solcx.sh \
/usr/local/bin/sync-svm-solc-versions-with-solcx
ENTRYPOINT ["/docker-entrypoint.sh"]
# Basic sanity checks to make sure the build is functional
FROM myth AS myth-smoke-test-execution
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]
WORKDIR /smoke-test
COPY --chmod=755 <<"EOT" /smoke-test.sh
#!/usr/bin/env bash
set -x -euo pipefail
# Check solcx knows about svm solc versions
svm install 0.5.0
sync-svm-solc-versions-with-solcx
python -c '
import solcx
print("\n".join(str(v) for v in solcx.get_installed_solc_versions()))
' | grep -P '^0\.5\.0$' || {
echo "solcx did not report svm-installed solc version";
exit 1
}
# Check myth can run
myth version
myth function-to-hash 'function transfer(address _to, uint256 _value) public returns (bool success)'
myth analyze /solidity_examples/timelock.sol > timelock.log || true
grep 'SWC ID: 116' timelock.log || {
error "Failed to detect SWC ID: 116 in timelock.sol";
exit 1
}
# Check that the entrypoint works
[[ $(/docker-entrypoint.sh version) == $(myth version) ]]
[[ $(/docker-entrypoint.sh echo hi) == hi ]]
[[ $(/docker-entrypoint.sh bash -c "printf '>%s<' 'foo bar'") == ">foo bar<" ]]
EOT
RUN --mount=source=./solidity_examples,target=/solidity_examples \
/smoke-test.sh 2>&1 | tee smoke-test.log
COPY ./mythril/support/assets/signatures.db /home/mythril/.mythril/signatures.db
ENTRYPOINT ["/usr/local/bin/myth"]
FROM scratch as myth-smoke-test
COPY --from=myth-smoke-test-execution /smoke-test/* /

@@ -0,0 +1,52 @@
variable "REGISTRY" {
default = "docker.io"
}
variable "VERSION" {
default = "dev"
}
variable "PYTHON_VERSION" {
default = "3.10"
}
variable "INSTALLED_SOLC_VERSIONS" {
default = "0.8.19"
}
function "myth-tags" {
params = [NAME]
result = formatlist("${REGISTRY}/${NAME}:%s", split(",", VERSION))
}
group "default" {
targets = ["myth", "myth-smoke-test"]
}
target "_myth-base" {
target = "myth"
args = {
PYTHON_VERSION = PYTHON_VERSION
INSTALLED_SOLC_VERSIONS = INSTALLED_SOLC_VERSIONS
}
platforms = [
"linux/amd64",
"linux/arm64"
]
}
target "myth" {
inherits = ["_myth-base"]
tags = myth-tags("mythril/myth")
}
target "myth-dev" {
inherits = ["_myth-base"]
tags = myth-tags("mythril/myth-dev")
}
target "myth-smoke-test" {
inherits = ["_myth-base"]
target = "myth-smoke-test"
output = ["build/docker/smoke-test"]
}

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -euo pipefail
# Install extra solc versions if SOLC is set
if [[ ${SOLC:-} != "" ]]; then
read -ra solc_versions <<<"${SOLC:?}"
svm install "${solc_versions[@]}"
fi
# Always sync versions, as there should be at least one solc version installed
# in the base image, and we may be running as root rather than the mythril user.
sync-svm-solc-versions-with-solcx
# By default we run myth with options from arguments we received. But if the
# first argument is a valid program, we execute that instead so that people can
# run other commands without overriding the entrypoint (e.g. bash).
if command -v "${1:-}" > /dev/null; then
exec -- "$@"
fi
exec -- myth "$@"

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
# Let solcx know about the solc versions installed by svm.
# We do this by symlinking svm's solc binaries into solcx's solc dir.
[[ -e ~/.svm ]] || exit 0
mkdir -p ~/.solcx
readarray -t svm_solc_bins <<<"$(find ~/.svm -type f -name 'solc-*')"
[[ ${svm_solc_bins[0]} != "" ]] || exit 0
for svm_solc in "${svm_solc_bins[@]}"; do
name=$(basename "${svm_solc:?}")
version="${name#"solc-"}" # strip solc- prefix
solcx_solc=~/.solcx/"solc-v${version:?}"
if [[ ! -e $solcx_solc ]]; then
ln -s "${svm_solc:?}" "${solcx_solc:?}"
fi
done

@@ -1,23 +1,29 @@
#!/bin/sh
#!/bin/bash
set -eo pipefail
NAME=$1
if [[ ! $NAME =~ ^mythril/myth(-dev)?$ ]];
then
echo "Error: unknown image name: $NAME" >&2
exit 1
fi
if [ ! -z $CIRCLE_TAG ];
then
VERSION=${CIRCLE_TAG#?}
GIT_VERSION=${CIRCLE_TAG#?}
else
VERSION=${CIRCLE_SHA1}
GIT_VERSION=${CIRCLE_SHA1}
fi
VERSION_TAG=${NAME}:${VERSION}
LATEST_TAG=${NAME}:latest
docker build -t ${VERSION_TAG} .
docker tag ${VERSION_TAG} ${LATEST_TAG}
# Build and test all versions of the image. (The result will stay in the cache,
# so the next build should be almost instant.)
docker buildx bake myth-smoke-test
echo "$DOCKERHUB_PASSWORD" | docker login -u $DOCKERHUB_USERNAME --password-stdin
docker push ${VERSION_TAG}
docker push ${LATEST_TAG}
# strip mythril/ from NAME, e.g. myth or myth-dev
BAKE_TARGET="${NAME#mythril/}"
VERSION="${GIT_VERSION:?},latest" docker buildx bake --push "${BAKE_TARGET:?}"
