On-demand High Throughput Computing


On-demand computing, simulation, and data analysis

Simply upload your Matlab, Python, C/C++, ... code and click to run


Say goodbye to long wait times

Just change a few lines of code and get results thousands of times faster than with a single PC.


Complex computation? No problem.

Processing data, linking to libraries, compiling from source: everything you do on your local computer can be done in minutes in the cloud.


Hate managing a fleet of servers?

We totally get it. There is no difficult API to learn; all you need to do is upload your code and run it.

Key features


High Throughput

Our system automatically parallelizes your computation across thousands of computing cores in the cloud. You pay the same to get the job done, but get results thousands of times faster.

Easy Onboarding

Change no more than 2 lines of your code. Do it yourself with our provided examples, or let us do it for you. Drag and drop your data into our web portal, and your program will be running within minutes. Use the terminal to install additional libraries and debug your code easily.

Multi-language support

Python, C/C++, Matlab, Julia, R, Octave, Fortran, NodeJS, and more.

User Interface


Our convenient user interface lets you simply upload your data and application and start running on thousands of computing cores within a few minutes.

Code Examples


Our software automatically parallelizes your code. Here are examples to get you started.


# Monte Carlo simulation with 10000 iterations
for trial_number in range(10000):
    result = run_one_simulation()
    export_result(result, trial_number)



# Monte Carlo simulation with 10000 iterations, parallelized with multiprocessing
from multiprocessing import Pool

def run_trial(trial_number):
    export_result(run_one_simulation(), trial_number)

with Pool() as pool:
    pool.map(run_trial, range(10000))


  
void monte_carlo_simulation() {
  for (int i = 0; i < 10000; i++) {
    float *result = run_one_simulation();
    export_result(result, i);
  }
}

  
#include <sys/wait.h>
#include <unistd.h>

void monte_carlo_simulation() {
  for (int i = 0; i < 10000; i++) {
    pid_t pid = fork();
    if (pid == 0) {
      /* Child process: run one trial, export it, then exit. */
      float *result = run_one_simulation();
      export_result(result, i);
      _exit(0);
    }
  }
  /* Wait for every child process to finish. */
  while (wait(NULL) > 0) {}
}

inputs = 1:10000;
results = [];
for i = inputs
    results(i) = run_one_simulation(i);
end

  
inputs = 1:10000;
results = zeros(size(inputs));
parfor i = 1:10000
  results(i) = run_one_simulation(i);
end

  
inputs = 1:10000;
result = ones(1, 10000);
for i = 1:10000
  result(i) = run_one_simulation(inputs(i));
endfor

  
if exist('OCTAVE_VERSION') ~= 0
  pkg load parallel
end

inputs = 1:10000;
numCores = nproc();

result = pararrayfun(numCores, @run_one_simulation, inputs);


result <- rep(0, times = 10000)
for (trial in 1:10000) {
    result[trial] <- run_one_simulation(trial)
}

  
library(parallel)

inputs <- 1:10000
numCores <- detectCores()

result <- mclapply(inputs, run_one_simulation, mc.cores = numCores)


Please contact us if you are interested in additional examples.

Our Pricing


Pay as you go

$0.05 / core-hour
Start Free Trial Now

FAQs


You can always contact our support team at support@simulation.cloud if you have any questions.

What is High-Throughput Computing (HTC)?

High-throughput computing distributes easily parallelizable jobs across a large number of servers, delivering results much faster than would be possible with a single workstation or even a mid-sized cluster. The main idea is to spread isolated routines over a pool of dedicated processors, which may number in the thousands. Each processor computes its piece of the work independently and sends the result back to the user. High-throughput computing and data analysis can reduce analyses that would normally take days to a matter of minutes.
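
As a rough sketch of this pattern (not our platform's API), the trials below are farmed out to a pool of worker processes using only Python's standard library; the toy run_one_simulation is a stand-in for any isolated routine.

# Minimal sketch of the HTC pattern using only Python's standard library.
import random
from concurrent.futures import ProcessPoolExecutor

def run_one_simulation(trial_number):
    return random.random()  # toy stand-in for any independent routine

if __name__ == "__main__":
    # Each trial runs independently on its own worker; results are collected at the end.
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(run_one_simulation, range(10000)))

On our platform, the pool of workers is simply thousands of cloud cores rather than the handful available on a local machine.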

What can HTC be used for?

High-throughput analysis can be leveraged in all areas of technical computing. For example, researchers can use our high-throughput platform to speed up long-running Python scripts containing large loops. As another example, aerospace engineers can script a parameter-space exploration in a matter of minutes and run thousands of Computational Fluid Dynamics (CFD) simulations in parallel, obtaining results within hours. The possibilities are endless.
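
As a purely illustrative sketch, a parameter sweep is just a grid of independent jobs; run_cfd_case and its parameter values below are hypothetical placeholders, not part of our platform.

# Hypothetical parameter-space sweep: every (angle, mach) pair is an
# independent job, so all of the cases can run in parallel.
from itertools import product
from concurrent.futures import ProcessPoolExecutor

def run_cfd_case(params):
    angle, mach = params
    return 0.01 * angle + 0.1 * mach  # toy stand-in for a real CFD solver

parameter_grid = list(product(range(0, 21), [0.3, 0.5, 0.7, 0.85]))

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(run_cfd_case, parameter_grid))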

How is HTC different from batch processing?

Our version of high-throughput computing differs from batch processing in two ways: 1) there is no queue to wait in, so jobs start instantly, and 2) our software performs the parallelization for you automatically.

What do I need to implement?

You only need to change a few lines in your code, and your function or application will be distributed among the necessary number of processors. There is essentially no barrier to using our high-throughput analysis platform.

Do I have to use the web-based platform?

No, we are actively developing a Python API that will allow you to securely run high-throughput analysis from your local computer. If you are interested in our API, please contact us.

What is the limit to the amount of parallelism?

There is currently no limit (within reason, e.g. 100,000 processors) to the number of jobs which can be submitted in parallel. The enormous capacity of the cloud allows on-demand access to a vast number of cores.

This applies to embarrassingly parallel jobs. For tightly coupled, high-performance parallel jobs, see our cloud-based solvers.