With AWS Snowball, yes we can…

I remember the first day of networking class, when our teacher told us: "never underestimate the bandwidth of a truck full of CDs". That must be what someone at AWS was thinking when they launched their Snowball service.

The truth is that I had heard my consultant colleagues talk about the Snowball solution many times, but until recently I had never been able to verify the existence of this mythical artifact for myself.

Recently, we worked on a project for the complete migration of a digital archive to AWS. It is a very interesting project: the client can guarantee the availability of videos, news, photographs and other content for its users, providing them with an always-on platform at the lowest possible cost.

The problem started when we calculated the time it would take to transfer the 150 TB (terabytes) of information, and the impact that traffic would have on the bandwidth the company had contracted. 150 TB works out to 1,200,000 Gb (gigabits), and the client has a 1 Gbps (gigabit per second) line. Assuming a 40% overhead for retransmissions and error handling, the calculation gave us 23.15 days during which the client would be unable to do anything with its internet access other than upload its entire digital archive... In other words, the project was not viable.
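The arithmetic above can be sketched in a quick one-liner. The figures are the ones from our project; the 40% overhead is our working assumption about retransmissions and error handling:

```shell
# 150 TB over a 1 Gbps line with 40% retransmission/error overhead
awk 'BEGIN {
  data_gb = 150 * 1000 * 8          # 150 TB expressed in gigabits
  goodput = 1 * (1 - 0.40)          # effective throughput of a 1 Gbps line
  printf "%.2f days\n", data_gb / goodput / 86400
}'
# prints: 23.15 days
```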

Remembering one of the questions on the AWS certification exam, I decided it was time to verify the existence of the famous Snowball. I got down to work:

I entered the AWS portal and, searching for Snowball, we easily found the service:

Once we are in, we proceed to configure it. There are three options: one to import data into AWS, one to export it, and one that allows us to run a hybrid environment. In our case, it will be the first:

The next step is to enter the address where we want the Snowball delivered, and the type of shipment: standard, which takes 3-7 days, or express, which takes 1-2 days.

Now we must choose the type of Snowball we want. When I did this, there were only a couple of options: 80 TB and 100 TB. However, Snowball was improved very recently, as we can see in the following screenshot: we now also have the ability to load AMIs (AWS virtual machine images) onto the Snowball, effectively giving us a portable datacenter. This will be discussed in future posts... but let's focus on the import.

Once the Snowball type has been selected (in our case, the standard 80 TB model), we also have to select the S3 bucket where we want our data to end up.

We turn to the next step: the security of our data. First, we have to create an IAM role that gives Snowball permissions on our resources, for example, to upload files to a specific bucket.
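As a sketch, the permissions side of that role might look like the following policy document. The bucket name lab-enimbos is the one we use later in this post; the exact list of actions the console generates for the role may differ:

```shell
# Write a hypothetical S3 permissions policy for the Snowball import job
cat > snowball-import-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::lab-enimbos",
        "arn:aws:s3:::lab-enimbos/*"
      ]
    }
  ]
}
EOF
```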

We also have to create a KMS key to encrypt our data while it is stored on the Snowball:

Once we have configured this, we choose whether we want to receive notifications by email or SMS, check that everything is correct, and order our device.

On the next screen, we can download our manifest and see our unlock code, which we will need later to start copying the data.

It took 3 days to arrive, and receiving it was as simple as receiving the Amazon orders that Iván gets at the Enimbos office every day. It was delivered to the office without any packaging, and with all the cables needed to connect it: both traditional RJ45 and fiber.

When we received it, I went to the datacenter with a couple of colleagues to connect it. The first thing to do is configure the Snowball's network: we connect it to a switch and obtain an IP via DHCP, or we can configure one manually.

Next we load the manifest and the unlock code on our server, so that we can establish an encrypted connection with the Snowball and start transmitting the data.
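With the Snowball client installed, unlocking the device looks roughly like this. The IP address, manifest path and unlock code below are made-up placeholders, not the real values from our job:

```shell
# Unlock the Snowball so that subsequent copy commands can talk to it
# (placeholder IP, manifest file and unlock code)
snowball start -i 192.168.1.50 \
               -m /home/admin/JID-manifest.bin \
               -u 01234-abcde-01234-abcde-01234
```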

If we are familiar with the AWS CLI, transfers to S3 will feel very familiar. This is the snowball copy command: snowball cp [OPTION...] SRC... s3://DEST

For example, to copy our entire NFS mount to the Snowball, we use the following command: snowball cp --recursive /NFS s3://lab-enimbos

We start copying with the Snowball, and the progress of the copy appears on the screen.

When the copy is finished, all we have to do is call the courier to pick it up; they will know where to take it, because the shipping label appears on the Snowball's screen.

We are in the era of youtubers and influencers, of digital content that keeps growing in quality and size. Storing and sharing that content quickly and easily is a challenge.

AWS is an expert in providing solutions for the biggest challenges, like uploading 50 TB of videos in record time.

The Snowball has an ultra-fast 10 Gb/s connection to transfer several TB in a couple of days, counting the delivery time; that is many times the monthly data you can transfer on a typical 4G plan. It also has several layers of encryption, so that your data travels safely to its destination.

They also thought about durability: it is very resistant and portable, and you can even kick it, since it withstands accelerations of up to 6G.

At Enimbos we are experts in cloud solutions, so we use products such as Snowball to transport our customers' data to the cloud. And if 100 TB is not enough for you, you can always hire the Snowmobile: a truck that arrives at your datacenter with a reinforced, 13.71-meter-long container and is capable of transferring up to 100 PB (petabytes), something that would be practically impossible to do online, because it would take years.
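The same back-of-the-envelope arithmetic we used for our project shows why: even ignoring retransmission overhead, pushing 100 PB through a 1 Gbps line would take about a quarter of a century:

```shell
# 100 PB over a 1 Gbps line, ignoring all overhead
awk 'BEGIN {
  data_gb = 100 * 1000 * 1000 * 8       # 100 PB expressed in gigabits
  printf "%.1f years\n", data_gb / 1 / (365 * 86400)
}'
# prints: 25.4 years
```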

Maybe with our next customer we will no longer show up with a pair of Snowballs, and we will get to try the Snowmobile. Who will be the driver?
