We recommend two methods for testing Wasabi upload speeds for BYOS buckets. For the most accurate results, use large files, run the test multiple times, and perform the test during business hours so that typical Wasabi server load is reflected in the results.


Browser

To test Wasabi upload speed using a browser:


  1. Navigate to the Wasabi console at:

    https://console.wasabisys.com/#/login

  2. Log in using the credentials for your BYOS storage account.
  3. If you would like to use a separate bucket for speed testing, click Create Bucket in the upper right corner and create a bucket in the same region as the one used in your Morro system.
  4. Select the bucket.

  5. Click Upload Files in the upper right corner.

  6. Wasabi does not provide transfer speed statistics, so prepare to time the transfer manually. Select a large file (at least 100 MB; larger if you have a lot of upload bandwidth), click Start Upload, and note how long the upload takes (see the worked example after these steps).

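To calculate the transfer speed, divide the file size by the elapsed time. For example, a 100 MB file that takes 80 seconds to upload corresponds to 100 / 80 = 1.25 MB/s, or about 10 Mbit/s.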



Command Line with s3cmd

The s3cmd utility can be used to test Wasabi upload speed when a GUI is not available. It requires Python and officially supports Linux and Mac; it may also run on Windows if Python is installed.


On Linux, it may be available from your distribution's package manager. For example, on Ubuntu you can install s3cmd with:


sudo apt update && sudo apt install s3cmd
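
On Mac, s3cmd is also commonly installed through Homebrew, and on any platform with Python and pip it can usually be installed from PyPI:

brew install s3cmd

pip install s3cmd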


It can also be downloaded here:


https://s3tools.org/s3cmd


Once it is installed, configure it as follows, entering the custom values shown after each prompt. Settings that can use the default values have been omitted.


user@server:~$ s3cmd --configure

<omitted>

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: <your access key>
Secret Key: <your secret key>
Default Region [US]: 


Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: s3.wasabisys.com


Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.s3.wasabisys.com
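
Note that s3.wasabisys.com is Wasabi's original us-east-1 endpoint. If your bucket is in a different region, Wasabi also provides region-specific endpoints (for example, s3.eu-central-1.wasabisys.com), so use the endpoint that matches your bucket's region.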


<omitted>
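
To confirm that the configuration works before timing anything, list the buckets in the account:

user@server:~$ s3cmd ls

This should print the buckets in your BYOS account, including the one you plan to upload to.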


To test transfer speed, create a large file (at least 100 MB; larger if you have a lot of upload bandwidth).
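
If you do not already have a suitable file, one way to create a 100 MB test file of random data on Linux is with dd (the file name 100MB is just an example; random data avoids any compression along the way skewing the result):

user@server:~/temp$ dd if=/dev/urandom of=100MB bs=1M count=100

Then time the upload with the following command (the example below is from Linux; 100MB is the test file and test0 is the name of the bucket):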


user@server:~/temp$ time s3cmd put 100MB s3://test0
upload: '100MB' -> 's3://test0/100MB'  [part 1 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   11s  1386.92 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 2 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   10s  1412.27 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 3 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   10s  1428.58 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 4 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   10s  1432.02 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 5 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   10s  1427.16 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 6 of 7, 15MB] [1 of 1]
 15728640 of 15728640   100% in   10s  1419.22 kB/s  done
upload: '100MB' -> 's3://test0/100MB'  [part 7 of 7, 10MB] [1 of 1]
 10485760 of 10485760   100% in    7s  1412.09 kB/s  done


real    1m14.399s
user    0m1.409s
sys     0m0.277s
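
In this example, 104,857,600 bytes uploaded in about 74.4 seconds of wall-clock time (the real line), which works out to roughly 1.4 MB/s, or about 11 Mbit/s.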


If your system does not have the time command, note the start and end times yourself and divide the file size by the elapsed seconds.
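
When you are finished testing, you can delete the test object, and remove the bucket if you created one just for this test (using the example names from above):

user@server:~/temp$ s3cmd del s3://test0/100MB

user@server:~/temp$ s3cmd rb s3://test0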