
Cloud Pricing

Deep learning is now available anywhere and anytime, backed by abundant resources on the cloud.
From a rich set of computing resources, including GPUs, you can choose the one that fits your purpose.
The CPU/GPU usage fee is charged only for the number of seconds you spend training or evaluating. While training and evaluation are not running, no CPU/GPU usage fee is charged, so costs can be kept to a minimum.
You can pay by credit card and will be billed on a monthly basis.

Pricing (excluding tax)

Compute Hours
CPU
  • CPU ×1: 85 yen / hour
GPU (examples of GPU usage fees)
  • NVIDIA® T4 GPU ×1: 130 yen / hour
  • NVIDIA® V100 GPU ×1: 560 yen / hour
  • NVIDIA® V100 GPU ×4: 2,900 yen / hour
  • NVIDIA® V100 GPU ×8: 5,800 yen / hour
  • NVIDIA® V100 GPU+ ×8: 7,500 yen / hour
Additional Workspace
200 yen / 10 GB per month
Model API publishing function
Request count billing type
  • Up to 500 requests / API: free
  • 501–3,000 requests / API: 3,000 yen / month
Instance occupancy type
  • CPU: 85 yen / hour
  • GPU: 560 yen / hour

* Usage of CPU and GPU is billed by the second.
* You are charged for the largest workspace size selected within the month.
* For compute hours, the CPU/GPU usage fee is charged only for the seconds during which the CPU/GPU actually runs; no CPU/GPU fee accrues while training or evaluation is not executing, for instance while you are editing a network (a worked sketch of this calculation follows these notes).
* There are no charges other than those listed in the table above; for example, there is no data transfer charge for uploading your dataset.
* See the FAQ for more detailed specifications.
* The GPU used in the instance occupancy type of the model API publishing function is the NVIDIA® V100 GPU.
* Please see here for details of the model API publishing function.
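
As an illustration of the per-second billing described in the notes above, here is a minimal sketch in Python (not an official Neural Network Console tool) that estimates a compute-hours fee from an execution time in seconds and an hourly rate from the table. Rounding up to the next whole yen is an assumption made for this sketch; see the FAQ for the exact billing rules.

```python
import math

# Hourly rates in yen (excluding tax), taken from the Compute Hours table above.
HOURLY_RATE_YEN = {
    "CPU x1": 85,
    "NVIDIA T4 GPU x1": 130,
    "NVIDIA V100 GPU x1": 560,
    "NVIDIA V100 GPU x4": 2900,
    "NVIDIA V100 GPU x8": 5800,
}

def estimated_fee_yen(execution_seconds: float, yen_per_hour: float) -> int:
    """Per-second billing: the fee accrues only while training/evaluation runs.

    Rounding up to the next whole yen is an assumption for illustration only.
    """
    return math.ceil(execution_seconds / 3600 * yen_per_hour)

# Example: a 30-minute (1,800-second) training run on a single NVIDIA V100 GPU.
print(estimated_fee_yen(1800, HOURLY_RATE_YEN["NVIDIA V100 GPU x1"]))  # -> 280 (yen)
```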

Free Usage

You can get started with Neural Network Console Cloud for free within the following free usage limits.

  • Compute Hours (GPU*): 2 hours
  • Workspace: 10 GB

* NVIDIA® T4 GPUs are available.

Estimated usage fee for learning

In the cloud version of Neural Network Console, the fee varies depending on the type of CPU/GPU used during learning.
As an estimate of usage fees, here are examples of training two networks of different scales on a CPU and on two types of GPUs.

* Sample 1
Dataset: MNIST
Network: LeNet (from sample project)
Epochs: 10

  • CPU: learning execution time 438 seconds, usage fee 85 yen / hour, estimated usage fee about 11 yen
  • NVIDIA® T4 GPU: learning execution time 91 seconds, usage fee 130 yen / hour, estimated usage fee about 4 yen
  • NVIDIA® V100 GPU: learning execution time 83 seconds, usage fee 560 yen / hour, estimated usage fee about 13 yen

In a small network like this one, a higher performance GPU (NVIDIA® V100 GPU) may not be used efficiently, so the learning execution time can end up about the same as on the lower performance NVIDIA® T4 GPU. Depending on the network and dataset, the ranking of GPU processing performance and of usage fees can therefore be reversed.

* Sample 2
Dataset: CIFAR-10
Network: ResNet-110
Epochs: 300

  • CPU: learning execution time 2,822,229 seconds (784 hours), usage fee 85 yen / hour, estimated usage fee about 66,636 yen
  • NVIDIA® T4 GPU: learning execution time 14,865 seconds (4.13 hours), usage fee 130 yen / hour, estimated usage fee about 537 yen
  • NVIDIA® V100 GPU: learning execution time 6,067 seconds (1.69 hours), usage fee 560 yen / hour, estimated usage fee about 944 yen

For a network of this size, the difference in GPU performance shows up in processing time, and the NVIDIA® V100 GPU finishes fastest.
The cloud version of Neural Network Console can also use multiple GPUs (multi-GPU) to further accelerate learning.
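
For reference, the estimates in the two samples above can be reproduced with the same per-second formula. The snippet below is only a sketch; rounding up to the next whole yen is an assumption made for illustration.

```python
import math

def estimated_fee_yen(execution_seconds: float, yen_per_hour: float) -> int:
    # Assumed rounding up to the next whole yen, for illustration only.
    return math.ceil(execution_seconds / 3600 * yen_per_hour)

# Sample 1 (MNIST / LeNet, 10 epochs)
print(estimated_fee_yen(438, 85))   # CPU              -> about 11 yen
print(estimated_fee_yen(91, 130))   # NVIDIA T4 GPU    -> about 4 yen
print(estimated_fee_yen(83, 560))   # NVIDIA V100 GPU  -> about 13 yen

# Sample 2 (CIFAR-10 / ResNet-110, 300 epochs)
print(estimated_fee_yen(2_822_229, 85))  # CPU              -> about 66,636 yen
print(estimated_fee_yen(14_865, 130))    # NVIDIA T4 GPU    -> about 537 yen
print(estimated_fee_yen(6_067, 560))     # NVIDIA V100 GPU  -> about 944 yen
```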

Payment Method

For payment, you can use your credit card.

  • VISA
  • mastercard
  • JCB
  • AMERICAN EXPRESS
  • Diners Club INTERNATIONAL

For corporations, we also offer a plan that supports payment by invoice.
Click here for details.

