Explore Workflows
| Name | Description | Retrieved From |
|---|---|---|
| scRNA-seq pipeline using Salmon and Alevin | | Path: pipeline.cwl, Branch/Commit ID: 85892d9 |
| EMG assembly for paired end Illumina | | Path: workflows/emg-assembly.cwl, Branch/Commit ID: f993cad |
| wf_trim_partial_and_map_se.cwl | This workflow takes in appropriate trimming params and demultiplexed reads, and performs the following steps in order: trimx1, trimx2, fastq-sort, filter repeat elements, fastq-sort, genomic mapping, sort alignment, index alignment, namesort, PCR dedup, sort alignment, index alignment. | Path: cwl/wf_trim_partial_and_map_se.cwl, Branch/Commit ID: master |
| EMG pipeline's QIIME workflow | Step 1: Set environment PYTHONPATH, QIIME_ROOT, PATH.<br>Step 2: Run the QIIME script pick_closed_reference_otus.py: `${python} ${qiimeDir}/bin/pick_closed_reference_otus.py -i $1 -o $2 -r ${qiimeDir}/gg_13_8_otus/rep_set/97_otus.fasta -t ${qiimeDir}/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt -p ${qiimeDir}/cr_otus_parameters.txt`<br>Step 3: Convert the new biom format to the old biom format (JSON): `${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_json.biom --table-type="OTU table" --to-json`<br>Step 4: Convert the new biom format to a classic OTU table: `${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table.txt --to-tsv --header-key taxonomy --table-type "OTU table"`<br>Step 5: Create an OTU summary: `${qiimeDir}/bin/biom summarize-table -i ${resultDir}/cr_otus/otu_table.biom -o ${resultDir}/cr_otus/${infileBase}_otu_table_summary.txt`<br>Step 6: Move one of the result files: `mv ${resultDir}/cr_otus/otu_table.biom ${resultDir}/cr_otus/${infileBase}_otu_table_hdf5.biom`<br>Step 7: Create a list of observations: `awk '{print $1}' ${resultDir}/cr_otus/${infileBase}_otu_table.txt \| sed '/#/d' > ${resultDir}/cr_otus/${infileBase}_otu_observations.txt`<br>Step 8: Create a phylogenetic tree by pruning GreenGenes and keeping observed OTUs: `${python} ${qiimeDir}/bin/filter_tree.py -i ${qiimeDir}/gg_13_8_otus/trees/97_otus.tree -t ${resultDir}/cr_otus/${infileBase}_otu_observations.txt -o ${resultDir}/cr_otus/${infileBase}_pruned.tree` | Path: workflows/qiime-workflow.cwl, Branch/Commit ID: 708fd97 |
| 5S-from-tablehits.cwl | | Path: tools/5S-from-tablehits.cwl, Branch/Commit ID: c211071 |
| pipeline.cwl | | Path: pipeline.cwl, Branch/Commit ID: master |
| heatmap-prepare.cwl | Runs the homer-make-tag-directory.cwl tool using scatter over the inputs bam_file, fragment_size, and total_reads. `dotproduct` is used as the `scatterMethod`, so one element is taken from each array to construct each job: 1) bam_file[0], fragment_size[0], total_reads[0]; 2) bam_file[1], fragment_size[1], total_reads[1]; … N) bam_file[N], fragment_size[N], total_reads[N]. The `bam_file`, `fragment_size`, and `total_reads` arrays must share the same ordering. | Path: tools/heatmap-prepare.cwl, Branch/Commit ID: master |
| wf_calculate_Models.cwl | | Path: yw_cwl_modeling/yw2cwl_parser/example_sql/paleocar_models/wf_calculate_Models.cwl, Branch/Commit ID: master |
| protein similarities | Run diamond on multiple DBs and merge-sort the results. | Path: CWL/Workflows/protein-diamond.workflow.cwl, Branch/Commit ID: master |
| variant-calling-pair.cwl | | Path: modules/pair/variant-calling-pair.cwl, Branch/Commit ID: master |
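The dotproduct scatter described for heatmap-prepare.cwl can be sketched as a minimal CWL workflow. This is an illustration of the pattern, not the real file: the step id, output ids, and the input names of homer-make-tag-directory.cwl are assumptions.

```yaml
# Sketch of a dotproduct scatter: job i pairs bam_file[i],
# fragment_size[i] and total_reads[i], so all three arrays
# must be the same length and in the same order.
cwlVersion: v1.0
class: Workflow
requirements:
  ScatterFeatureRequirement: {}
inputs:
  bam_file: File[]
  fragment_size: int[]
  total_reads: int[]
outputs:
  tag_folder:
    type: Directory[]
    outputSource: make_tag_directory/output
steps:
  make_tag_directory:
    run: homer-make-tag-directory.cwl
    scatter: [bam_file, fragment_size, total_reads]
    scatterMethod: dotproduct
    in:
      bam_file: bam_file
      fragment_size: fragment_size
      total_reads: total_reads
    out: [output]
```

With `dotproduct`, N-element inputs produce exactly N jobs; the alternative `flat_crossproduct` would instead run one job per combination (N³ here).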
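The numbered QIIME steps in the table above can be collected into a single shell function. This is a sketch, not the contents of workflows/qiime-workflow.cwl itself: the function name is illustrative, and the `$python`, `$qiimeDir`, `$resultDir` and `$infileBase` variables, plus the PYTHONPATH/QIIME_ROOT/PATH setup of Step 1, are assumed to be provided by the caller.

```shell
# Hypothetical wrapper around the eight QIIME steps; only defines the
# function, nothing runs until the caller invokes it.
emg_qiime_pipeline() {
    local in_fasta="$1" out_dir="$2"

    # Step 2: closed-reference OTU picking against GreenGenes 13_8
    ${python} ${qiimeDir}/bin/pick_closed_reference_otus.py \
        -i "$in_fasta" -o "$out_dir" \
        -r ${qiimeDir}/gg_13_8_otus/rep_set/97_otus.fasta \
        -t ${qiimeDir}/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt \
        -p ${qiimeDir}/cr_otus_parameters.txt

    # Steps 3-4: convert the HDF5 biom table to JSON and to classic TSV
    ${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom \
        -o ${resultDir}/cr_otus/${infileBase}_otu_table_json.biom \
        --table-type="OTU table" --to-json
    ${qiimeDir}/bin/biom convert -i ${resultDir}/cr_otus/otu_table.biom \
        -o ${resultDir}/cr_otus/${infileBase}_otu_table.txt \
        --to-tsv --header-key taxonomy --table-type "OTU table"

    # Step 5: summarize the table
    ${qiimeDir}/bin/biom summarize-table \
        -i ${resultDir}/cr_otus/otu_table.biom \
        -o ${resultDir}/cr_otus/${infileBase}_otu_table_summary.txt

    # Step 6: rename the HDF5 table
    mv ${resultDir}/cr_otus/otu_table.biom \
        ${resultDir}/cr_otus/${infileBase}_otu_table_hdf5.biom

    # Step 7: list observed OTU ids (first column, minus comment lines)
    awk '{print $1}' ${resultDir}/cr_otus/${infileBase}_otu_table.txt \
        | sed '/#/d' > ${resultDir}/cr_otus/${infileBase}_otu_observations.txt

    # Step 8: prune the GreenGenes tree down to the observed OTUs
    ${python} ${qiimeDir}/bin/filter_tree.py \
        -i ${qiimeDir}/gg_13_8_otus/trees/97_otus.tree \
        -t ${resultDir}/cr_otus/${infileBase}_otu_observations.txt \
        -o ${resultDir}/cr_otus/${infileBase}_pruned.tree
}
```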
