
Cluster-Fork

October 11, 2005, 7:50 AM

Often we want to execute parallel jobs consisting of standard UNIX commands. By "parallel" we mean the same command runs on multiple nodes of the cluster. Rocks provides a simple tool for this purpose called cluster-fork.

By default, cluster-fork uses a simple series of ssh connections to launch the task serially on every compute node in the cluster.
Cluster-fork is smart enough to ignore dead nodes. Usually the job is "blocking": cluster-fork waits for the job to finish on one node before moving to the next.
By using the "--bg" flag you can instruct cluster-fork to start the jobs in the background.
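A minimal sketch of the default serial behavior described above, assuming hypothetical node names and stubbing ssh with a shell function so the sketch runs anywhere (on a real cluster you would use the real ssh):

```shell
# Sketch of cluster-fork's default mode: a serial loop of ssh connections,
# one compute node at a time. The node names are hypothetical, and ssh is
# stubbed out here purely for illustration.
ssh() {
    local node=$1; shift
    echo "[would run on $node]: $*"
}

NODES="compute-0-0 compute-0-1 compute-0-2"
for node in $NODES; do
    ssh "$node" uptime    # blocking: finishes here before the next node
done
```

With "--bg", each ssh would instead be launched in the background rather than waited on in turn.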

Often you wish to name the nodes your job is started on. This can be done either with an SQL statement or by specifying the nodes using a specific shorthand.

1. The first method uses the SQL database on the frontend:

$ cluster-fork --query "select name from nodes where name like 'compute-1-%'" [cmd]

2. The second method requires us to explicitly name each node:

--node=compute-0-%d:0-2   : compute-0-0, compute-0-1, compute-0-2
--node=compute-0-%d:0,2-3 : compute-0-0, compute-0-2, compute-0-3
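To show what the %d shorthand stands for, here is a small sketch (the helper name expand_nodes and the hostnames are hypothetical) that expands a pattern over a contiguous range by hand:

```shell
# Expand a shorthand like compute-0-%d:0-2 manually: substitute each
# integer in the range into the %d placeholder of the pattern.
expand_nodes() {
    local pattern=$1 first=$2 last=$3
    for i in $(seq "$first" "$last"); do
        printf "${pattern}\n" "$i"
    done
}

expand_nodes 'compute-0-%d' 0 2
```

This prints compute-0-0, compute-0-1, and compute-0-2, one per line, matching the first shorthand example above.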