Explore Workflows
Browse the workflows that have already been parsed, or add your own.
| Name | Description | Retrieved From |
|---|---|---|
| Add snv and indel bam-readcount files to a vcf | | Path: `definitions/subworkflows/vcf_readcount_annotator.cwl`<br>Branch/Commit ID: `No_filters_detect_variants` |
| step-valuefrom4-wf.cwl | | Path: `tests/step-valuefrom4-wf.cwl`<br>Branch/Commit ID: `master` |
| QIIME2 Step 1 | QIIME2 Import and Demux Step 1 | Path: `packed/qiime2-step1-import-demux.cwl`<br>Branch/Commit ID: `qiime2-workflow`<br>Packed ID: `main` |
| io-any-wf-1.cwl | | Path: `tests/io-any-wf-1.cwl`<br>Branch/Commit ID: `master` |
| scRNA-seq pipeline using Salmon and Alevin | | Path: `pipeline.cwl`<br>Branch/Commit ID: `b9c8e26` |
| EMG pipeline's QIIME workflow | Step 1: Set environment PYTHONPATH, QIIME_ROOT, PATH.<br>Step 2: Run QIIME script pick_closed_reference_otus.py: `${python} ${qiimeDir}/bin/pick_closed_reference_otus.py -i $1 -o $2 -r ${qiimeDir}/gg_13_8_otus/rep_set/97_otus.fasta -t ${qiimeDir}/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt -p ${qiimeDir}/cr_otus_parameters.txt`<br>Step 3: Convert new biom format to old biom format (json): `${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_json.biom --table-type="OTU table" --to-json`<br>Step 4: Convert new biom format to a classic OTU table: `${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table.txt --to-tsv --header-key taxonomy --table-type "OTU table"`<br>Step 5: Create an OTU summary: `${qiimeDir}/bin/biom summarize-table -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_summary.txt`<br>Step 6: Move one of the result files: `mv ${resultDir}/cr_otus/otu_table.biom ${resultDir}/cr_otus/${infileBase}_otu_table_hdf5.biom`<br>Step 7: Create a list of observations: `awk '{print $1}' ${resultDir}/cr_otus/${infileBase}_otu_table.txt \| sed '/#/d' > ${resultDir}/cr_otus/${infileBase}_otu_observations.txt`<br>Step 8: Create a phylogenetic tree by pruning GreenGenes and keeping observed OTUs: `${python} ${qiimeDir}/bin/filter_tree.py -i ${qiimeDir}/gg_13_8_otus/trees/97_otus.tree -t ${resultDir}/cr_otus/${infileBase}_otu_observations.txt -o ${resultDir}/cr_otus/${infileBase}_pruned.tree`<br>(A hedged CWL sketch of one of these steps appears after this table.) | Path: `workflows/qiime-workflow.cwl`<br>Branch/Commit ID: `3168316` |
| scatter-wf3_v1_0.cwl#main | | Path: `testdata/scatter-wf3_v1_0.cwl`<br>Branch/Commit ID: `aa13f7bad47e8df2349bdebd163e1830537d7f93`<br>Packed ID: `main` |
| snps_and_indels.cwl | | Path: `workflows/subworkflows/snps_and_indels.cwl`<br>Branch/Commit ID: `master` |
| Whole Genome Sequence processing workflow scattered over samples | This is a “real-world” workflow example for processing Next Generation Sequencing (NGS) Whole Genome Sequence (WGS) data. You can learn more and run this workflow yourself by going through the [Processing Whole Genome Sequences](https://doc.arvados.org/main/user/tutorials/wgs-tutorial.html) walkthrough in the Arvados user guide.<br>The steps of this workflow include:<br>1. Check of FASTQ quality using FastQC<br>2. Local alignment using BWA-MEM<br>3. Variant calling in parallel using GATK Haplotype Caller<br>4. Generation of an HTML report comparing variants against the ClinVar archive<br>The primary input parameter is the **Directory of paired FASTQ files**, which should contain paired FASTQ files (suffixed with `_1` and `_2`) to be processed. The workflow scatters over the samples to process them in parallel. The remaining parameters are reference data used by various tools in the pipeline.<br>(A minimal CWL scatter sketch appears after this table.) | Path: `WGS-processing/cwl/wgs-processing-wf.cwl`<br>Branch/Commit ID: `main` |
| preprocess_vcf.cwl | This workflow will perform preprocessing steps on VCFs for the OxoG/Variantbam/Annotation workflow. | Path: `preprocess_vcf.cwl`<br>Branch/Commit ID: `master` |
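The EMG pipeline's QIIME entry above describes its steps as plain shell commands. Purely as an illustration of how one such step could be expressed as a CWL tool, here is a minimal sketch wrapping the `biom convert` call from Step 3; the tool structure and the input/output names below are assumptions made for illustration and are not copied from `workflows/qiime-workflow.cwl`.

```yaml
# Hedged sketch: a CommandLineTool wrapping Step 3 of the EMG QIIME
# description (biom convert ... --to-json). Field names here are
# illustrative assumptions, not taken from workflows/qiime-workflow.cwl.
cwlVersion: v1.0
class: CommandLineTool
baseCommand: [biom, convert]

inputs:
  otu_table:
    type: File
    inputBinding:
      prefix: -i            # new-format (HDF5) OTU table
  output_name:
    type: string
    inputBinding:
      prefix: -o            # e.g. <infileBase>_otu_table_json.biom

arguments: ["--table-type", "OTU table", "--to-json"]

outputs:
  json_table:
    type: File
    outputBinding:
      glob: $(inputs.output_name)
```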
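The Whole Genome Sequence processing entry scatters per-sample processing over a directory of paired FASTQ files, and `scatter-wf3_v1_0.cwl#main` exercises the same scatter feature. As a rough sketch of what scattering over samples looks like in CWL, here is a minimal workflow; the step name `process_sample` and the tool `process_sample.cwl` are hypothetical placeholders, not part of any listed workflow.

```yaml
# Minimal sketch of a CWL workflow that scatters a per-sample step over
# an array of inputs. process_sample / process_sample.cwl are hypothetical
# placeholders, not taken from the listed workflows.
cwlVersion: v1.0
class: Workflow
requirements:
  ScatterFeatureRequirement: {}

inputs:
  samples:
    type: File[]              # e.g. one FASTQ file (or pair) per sample

outputs:
  per_sample_results:
    type: File[]
    outputSource: process_sample/result

steps:
  process_sample:
    run: process_sample.cwl   # hypothetical per-sample tool or subworkflow
    scatter: sample           # run once per element of `samples`
    in:
      sample: samples
    out: [result]
```

Entries listed with a Packed ID (for example `main`) are packed CWL documents that bundle several process objects in one file; the entry point is addressed with a URI fragment, as in `scatter-wf3_v1_0.cwl#main` above.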
