Explore Workflows
View already parsed workflows below.
EMG pipeline's QIIME workflow

Step 1: Set the environment variables PYTHONPATH, QIIME_ROOT, and PATH.

Step 2: Run the QIIME script pick_closed_reference_otus.py:
`${python} ${qiimeDir}/bin/pick_closed_reference_otus.py -i $1 -o $2 -r ${qiimeDir}/gg_13_8_otus/rep_set/97_otus.fasta -t ${qiimeDir}/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt -p ${qiimeDir}/cr_otus_parameters.txt`

Step 3: Convert the new BIOM format to the old BIOM format (JSON):
`${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_json.biom --table-type="OTU table" --to-json`

Step 4: Convert the new BIOM format to a classic OTU table:
`${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table.txt --to-tsv --header-key taxonomy --table-type "OTU table"`

Step 5: Create an OTU summary:
`${qiimeDir}/bin/biom summarize-table -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_summary.txt`

Step 6: Move one of the result files:
`mv ${resultDir}/cr_otus/otu_table.biom ${resultDir}/cr_otus/${infileBase}_otu_table_hdf5.biom`

Step 7: Create a list of observations:
`awk '{print $1}' ${resultDir}/cr_otus/${infileBase}_otu_table.txt | sed '/#/d' > ${resultDir}/cr_otus/${infileBase}_otu_observations.txt`

Step 8: Create a phylogenetic tree by pruning GreenGenes and keeping observed OTUs:
`${python} ${qiimeDir}/bin/filter_tree.py -i ${qiimeDir}/gg_13_8_otus/trees/97_otus.tree -t ${resultDir}/cr_otus/${infileBase}_otu_observations.txt -o ${resultDir}/cr_otus/${infileBase}_pruned.tree`

Path: workflows/qiime-workflow.cwl
Branch/Commit ID: 8e196ab4fc4e06d97a3d943d6bc59b4e970ed129

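For readers who want to try these steps outside a CWL runner, here is a minimal Python sketch that chains them with `subprocess`. The concrete values (`qiime_dir`, `result_dir`, `infile_base`, `reads`) stand in for the `${...}` placeholders above and are illustrative assumptions, not values taken from qiime-workflow.cwl; Step 1's environment setup is assumed to have been done by the caller.

```python
"""Illustrative sketch chaining the eight steps above outside CWL."""
import os
import shutil
import subprocess

qiime_dir = "/opt/qiime"   # assumed install root (${qiimeDir})
result_dir = "results"     # assumed output root (${resultDir})
infile_base = "sample"     # assumed input basename (${infileBase})
reads = "reads.fasta"      # assumed input reads ($1)
cr_otus = os.path.join(result_dir, "cr_otus")

def run(cmd):
    """Run one step, failing fast as a CWL step would."""
    subprocess.run(cmd, check=True)

# Step 2: closed-reference OTU picking against GreenGenes 13_8.
run([os.path.join(qiime_dir, "bin", "pick_closed_reference_otus.py"),
     "-i", reads, "-o", result_dir,
     "-r", os.path.join(qiime_dir, "gg_13_8_otus/rep_set/97_otus.fasta"),
     "-t", os.path.join(qiime_dir, "gg_13_8_otus/taxonomy/97_otu_taxonomy.txt"),
     "-p", os.path.join(qiime_dir, "cr_otus_parameters.txt")])

biom = os.path.join(qiime_dir, "bin", "biom")
table = os.path.join(cr_otus, "otu_table.biom")

# Steps 3-5: JSON BIOM table, classic TSV table, and summary.
run([biom, "convert", "-i", table,
     "-o", os.path.join(cr_otus, infile_base + "_otu_table_json.biom"),
     "--table-type=OTU table", "--to-json"])
run([biom, "convert", "-i", table,
     "-o", os.path.join(cr_otus, infile_base + "_otu_table.txt"),
     "--to-tsv", "--header-key", "taxonomy", "--table-type", "OTU table"])
run([biom, "summarize-table", "-i", table,
     "-o", os.path.join(cr_otus, infile_base + "_otu_table_summary.txt")])

# Step 6: rename the HDF5 result file.
shutil.move(table, os.path.join(cr_otus, infile_base + "_otu_table_hdf5.biom"))

# Step 7: first column of the classic table, minus '#' header lines
# (the awk | sed pipeline above).
with open(os.path.join(cr_otus, infile_base + "_otu_table.txt")) as src, \
     open(os.path.join(cr_otus, infile_base + "_otu_observations.txt"), "w") as dst:
    for line in src:
        field = line.split("\t")[0]
        if "#" not in field:
            dst.write(field + "\n")

# Step 8: prune the GreenGenes tree down to the observed OTUs.
run(["python", os.path.join(qiime_dir, "bin", "filter_tree.py"),
     "-i", os.path.join(qiime_dir, "gg_13_8_otus/trees/97_otus.tree"),
     "-t", os.path.join(cr_otus, infile_base + "_otu_observations.txt"),
     "-o", os.path.join(cr_otus, infile_base + "_pruned.tree")])
```
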
BLAST against rRNA db

Path: bacterial_noncoding/wf_blastn.cwl
Branch/Commit ID: 4f4448f71645275db5b84eb551990dfe3bf37cbb

Find reads with predicted coding sequences above 60 AA in length

Path: workflows/orf_prediction.cwl
Branch/Commit ID: 25129f55226dee595ef941edc24d3c44414e0523

star-cufflinks_wf_pe.cwl

Path: workflows/star-cufflinks/paired_end/star-cufflinks_wf_pe.cwl
Branch/Commit ID: 5781822fdc8e4a7e79e7709a64209bba9acd41e2

strelka workflow

Path: definitions/subworkflows/strelka_and_post_processing.cwl
Branch/Commit ID: eb0092603bf57acb7bda08a06e4f2f1e2a8c9b6d

count-lines10-wf.cwl

Path: cwltool/schemas/v1.0/v1.0/count-lines10-wf.cwl
Branch/Commit ID: 7bfe73a708dbf31d037303bb5a8fed1a79984b0f

PCA - Principal Component Analysis

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by eigendecomposition of the covariance matrix; the SVD approach is generally preferred for numerical accuracy.

Path: workflows/pca.cwl
Branch/Commit ID: d1bef74924efcb8bfaa00987b3f148d5a192b7a9

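As an aside, the SVD-based calculation the description refers to is easy to sketch with numpy. The function below is illustrative only; its name, signature, and defaults are assumptions, not part of pca.cwl:

```python
import numpy as np

def pca(X, n_components=2, scale=False):
    """PCA via SVD of the centered (and optionally scaled) data matrix.

    As the description above notes, SVD of the data matrix is generally
    preferred over eigendecomposition of the covariance matrix for
    numerical accuracy.
    """
    X = X - X.mean(axis=0)               # center each variable
    if scale:
        X = X / X.std(axis=0, ddof=1)    # optionally scale to unit variance
    # X = U @ diag(S) @ Vt; the rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # projected observations
    explained_var = S**2 / (X.shape[0] - 1)           # variance per component
    return scores, Vt[:n_components], explained_var[:n_components]

# Example: 100 observations of 5 possibly correlated variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, axes, var = pca(X)
print(scores.shape, axes.shape, var)
```
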
Data2Services CWL workflow to convert CSV/TSV files with statements split (Vincent Emonet <vincent.emonet@gmail.com>)

Path: support/aynec-fb13-a/virtuoso-workflow/workflow.cwl
Branch/Commit ID: ffc33a6db55c50d1dd7b766b687e448249820df9

output-arrays-file-wf.cwl

Path: v1.0/v1.0/output-arrays-file-wf.cwl
Branch/Commit ID: 40fcfc01812046f012acf5153cc955ee848e69e3

varscan somatic workflow

Path: definitions/subworkflows/varscan.cwl
Branch/Commit ID: ffd73951157c61c1581d346628d75b61cdd04141