Explore Workflows

workflow graph allele-vcf-alignreads-se-pe.cwl

The workflow aligns FASTQ files from the `fastq_files` input to the reference genome (`reference_star_indices_folder`) and to the in silico generated genome (`insilico_star_indices_folder`, a concatenated genome for the `strain1` and `strain2` strains). For both genomes STAR is run with the `outFilterMultimapNmax` parameter set to 1 to discard all multimapped reads. For the in silico genome a SAM file is generated, split into two SAM files based on strain names, and then coordinate sorted into BAM format. The BAM file produced by the STAR alignment to the reference genome is also coordinate sorted.
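A rough command-line sketch of the steps described above. Index paths and read file names are placeholders, and the strain-based split assumes that chromosome names in the concatenated genome carry the strain prefix; the actual workflow runs these tools as CWL steps.

```bash
# Align to the reference genome, keeping only uniquely mapped reads
STAR --genomeDir reference_star_indices_folder \
     --readFilesIn reads_R1.fastq reads_R2.fastq \
     --outFilterMultimapNmax 1 \
     --outFileNamePrefix reference_

# Align to the in silico concatenated (strain1 + strain2) genome
STAR --genomeDir insilico_star_indices_folder \
     --readFilesIn reads_R1.fastq reads_R2.fastq \
     --outFilterMultimapNmax 1 \
     --outFileNamePrefix insilico_

# Split the in silico SAM by strain (assuming strain-prefixed chromosome names)
# and coordinate sort each part into BAM
samtools view -h insilico_Aligned.out.sam | grep -E '^@|strain1' | samtools sort -o strain1.bam -
samtools view -h insilico_Aligned.out.sam | grep -E '^@|strain2' | samtools sort -o strain2.bam -

# Coordinate sort the reference alignment as well
samtools sort -o reference.bam reference_Aligned.out.sam
```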

https://github.com/datirium/workflows.git

Path: subworkflows/allele-vcf-alignreads-se-pe.cwl

Branch/Commit ID: 6bf56698c6fe6e781723dea32bc922b91ef49cf3

workflow graph DiffBind - Differential Binding Analysis of ChIP-Seq Peak Data

Differential Binding Analysis of ChIP-Seq Peak Data
---------------------------------------------------

DiffBind processes ChIP-Seq data enriched for genomic loci where specific protein/DNA binding occurs, including peak sets identified by ChIP-Seq peak callers and aligned sequence read datasets. It is designed to work with multiple peak sets simultaneously, representing different ChIP experiments (antibodies, transcription factors and/or histone marks, experimental conditions, replicates), as well as to manage the results of multiple peak callers.

For more information please refer to:
-------------------------------------

Ross-Innes CS, Stark R, Teschendorff AE, Holmes KA, Ali HR, Dunning MJ, Brown GD, Gojis O, Ellis IO, Green AR, Ali S, Chin S, Palmieri C, Caldas C, Carroll JS (2012). “Differential oestrogen receptor binding is associated with clinical outcome in breast cancer.” Nature, 481, 389-393.

https://github.com/datirium/workflows.git

Path: workflows/diffbind.cwl

Branch/Commit ID: 1a46cb0e8f973481fe5ae3ae6188a41622c8532e

workflow graph RNA-Seq pipeline paired-end stranded mitochondrial

A slightly modified version of the original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **RNA-Seq** basic analysis for a **strand-specific paired-end** experiment. Additional steps were added to map the data to the mitochondrial chromosome only and then merge the output. Experiment files in [FASTQ](http://maq.sourceforge.net/fastq.shtml) format, compressed or not, can be used. The workflow should be used only with paired-end strand-specific RNA-Seq data.

It performs the following steps (see the command sketch after the list):

1. `STAR` to align reads from the input FASTQ files according to the predefined reference indices; generates an unsorted BAM file and an alignment statistics file
2. `fastx_quality_stats` to analyze the input FASTQ file and generate a quality statistics file
3. `samtools sort` to generate a coordinate sorted BAM(+BAI) file pair from the unsorted BAM file obtained in step 1 (after running STAR)
4. Generate a BigWig file based on the sorted BAM file
5. Map the input FASTQ file to the predefined rRNA reference indices using Bowtie to determine the level of rRNA contamination; export the resulting statistics to a file
6. Calculate isoform expression levels for the sorted BAM file and a GTF/TAB annotation file using the `GEEP` reads-counting utility; export the results to a file
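A minimal command-line sketch of the distinctive mitochondrial branch of this pipeline. Index directories and file names are placeholders; the actual workflow wires these tools together as CWL steps.

```bash
# Align the paired-end reads to the full reference genome (unsorted BAM)
STAR --genomeDir star_indices_folder \
     --readFilesIn reads_R1.fastq reads_R2.fastq \
     --outSAMtype BAM Unsorted \
     --outFileNamePrefix genome_

# Align the same reads to mitochondrial-only indices
STAR --genomeDir mito_star_indices_folder \
     --readFilesIn reads_R1.fastq reads_R2.fastq \
     --outSAMtype BAM Unsorted \
     --outFileNamePrefix chrM_

# Coordinate sort both alignments, merge them, and index the result
samtools sort -o genome.sorted.bam genome_Aligned.out.bam
samtools sort -o chrM.sorted.bam   chrM_Aligned.out.bam
samtools merge merged.bam genome.sorted.bam chrM.sorted.bam
samtools index merged.bam
```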

https://github.com/datirium/workflows.git

Path: workflows/rnaseq-pe-dutp-mitochondrial.cwl

Branch/Commit ID: 935a78f1aff757f977de4e3672aefead3b23606b

workflow graph THOR - differential peak calling of ChIP-seq signals with replicates

What is THOR?
--------------

THOR is an HMM-based approach to detect and analyze differential peaks in two sets of ChIP-seq data from distinct biological conditions with replicates. THOR performs genomic signal processing, peak calling and p-value calculation in an integrated framework.

For more information please refer to:
-------------------------------------

Allhoff, M., Sere K., Freitas, J., Zenke, M., Costa, I.G. (2016), Differential Peak Calling of ChIP-seq Signals with Replicates with THOR, Nucleic Acids Research, epub gkw680.

https://github.com/datirium/workflows.git

Path: workflows/rgt-thor.cwl

Branch/Commit ID: 1a46cb0e8f973481fe5ae3ae6188a41622c8532e

workflow graph RNA-Seq pipeline single-read

The original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **RNA-Seq** basic analysis for a **single-read** experiment. A corresponding input [FASTQ](http://maq.sourceforge.net/fastq.shtml) file has to be provided. The workflow should be used only with single-read RNA-Seq data.

It performs the following steps (a short command sketch follows the list):

1. Use STAR to align reads from the input FASTQ file according to the predefined reference indices; generate an unsorted BAM file and an alignment statistics file
2. Use fastx_quality_stats to analyze the input FASTQ file and generate a quality statistics file
3. Use samtools sort to generate a coordinate sorted BAM(+BAI) file pair from the unsorted BAM file obtained in step 1 (after running STAR)
4. Generate a BigWig file based on the sorted BAM file
5. Map the input FASTQ file to the predefined rRNA reference indices using Bowtie to determine the level of rRNA contamination; export the resulting statistics to a file
6. Calculate isoform expression levels for the sorted BAM file and a GTF/TAB annotation file using the GEEP reads-counting utility; export the results to a file
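A minimal sketch of the QC and rRNA contamination steps from the list above. The index prefix, thread count and file names are placeholders; the GEEP invocation is omitted because its exact interface is workflow specific.

```bash
# Per-cycle quality statistics for the input FASTQ file (FASTX-Toolkit, Phred+33 offset)
fastx_quality_stats -Q33 -i reads.fastq -o reads_quality_stats.tsv

# Estimate rRNA contamination by mapping reads to rRNA-only Bowtie indices;
# alignments are discarded, only the mapping statistics printed to stderr are kept
bowtie -p 4 rrna_indices_prefix reads.fastq /dev/null 2> rrna_contamination_stats.txt
```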

https://github.com/datirium/workflows.git

Path: workflows/rnaseq-se.cwl

Branch/Commit ID: e45ab1b9ac5c9b99fdf7b3b1be396dc42c2c9620

workflow graph Trim Galore ATAC-Seq pipeline single-read

This ATAC pipeline is based on the original [BioWardrobe's](https://biowardrobe.com) [PubMed ID:26248465](https://www.ncbi.nlm.nih.gov/pubmed/26248465) **ChIP-Seq** basic analysis workflow for a **single-read** experiment with Trim Galore. The pipeline was adapted for ATAC-Seq single-read data analysis by updating the genome coverage step.

### Data Analysis Steps

SciDAP starts from the .fastq files which most DNA cores and commercial NGS companies return. Starting from raw data allows us to ensure that all experiments have been processed in the same way and simplifies the deposition of data to GEO upon publication. The data can be uploaded from the user's computer, downloaded directly from an ftp server of the core facility by providing a URL, or retrieved from GEO by providing an SRA accession number.

Our current pipelines include the following steps:

1. Trimming the adapters with TrimGalore. This step is particularly important when the reads are long and the fragments are short, as in ATAC, resulting in sequencing adapters at the end of reads. If the adapter is not removed the read will not map. TrimGalore can recognize standard adapters, such as Nextera/Tn5 adapters.
2. QC
3. (Optional) Trimming adapters on the 5' or 3' end by a specified number of bases.
4. Mapping reads with Bowtie. Only uniquely mapped reads with fewer than 3 mismatches are used in the downstream analysis. Results are saved as a .bam file.
5. Removal of reads mapping to chromosome M. Since there are many copies of chromosome M in the cell and it is not protected by histones, some ATAC libraries have up to 50% of reads mapping to chrM. We recommend using the Omni-ATAC protocol, which reduces chrM reads and provides better specificity.
6. (Optional) Removal of duplicates (reads/pairs of reads mapping to exactly the same location). This step is used to remove reads overamplified in PCR. Unfortunately, it may also remove "good" reads. We usually do not remove duplicates unless the library is heavily duplicated. Please note that MACS2 will remove 'excessive' duplicates during peak calling in a smart way (those not supported by other nearby reads).
7. Peak calling by MACS2. Optionally, it is possible to specify the read extension length for MACS2 to use if the length determined automatically is wrong.
8. Generation of BigWig coverage files for display in the browser. Since the cuts by the Tn5 transposome are 9 bp apart, we show coverage by 9 bp reads rather than by fragments as in ChIP-Seq. The coverage shows the number of fragments at each base in the genome normalized to the number of millions of mapped reads. This way the peak of coverage will be located at the most accessible site.

### Details

_Trim Galore_ is a wrapper around [Cutadapt](https://github.com/marcelm/cutadapt) and [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) to consistently apply adapter and quality trimming to FastQ files, with extra functionality for RRBS data.

As outputs the workflow returns a coordinate sorted BAM file along with its BAI index file, quality statistics of the input FASTQ file, read coverage in the form of a BigWig file, peak calling data in the form of narrowPeak or broadPeak files, islands with the assigned nearest genes and region type, and data for the average tag density plot (based on the BAM file).

The workflow starts with the *fastx\_quality\_stats* step from FASTX-Toolkit to calculate quality statistics for the input FASTQ file. At the same time `bowtie` is used to align reads from the input FASTQ file to the reference genome (*bowtie\_aligner*). The output of this step is an unsorted SAM file, which is sorted and indexed by `samtools sort` and `samtools index` (*samtools\_sort\_index*). Depending on the workflow's input parameters, the sorted and indexed BAM file can be processed by `samtools rmdup` (*samtools\_rmdup*) to get rid of duplicated reads. If removing duplicates is not required, the original BAM and BAI files are returned unchanged. Otherwise the *samtools\_sort\_index\_after\_rmdup* step repeats `samtools sort` and `samtools index` on the deduplicated BAM file.

Right after that `macs2 callpeak` performs peak calling (*macs2\_callpeak*). Based on the returned outputs, the next step, *macs2\_island\_count*, calculates the number of islands and the estimated fragment size. If the latter is less than 80 bp (hardcoded in the workflow), `macs2 callpeak` is rerun with a forced fixed fragment size value (*macs2\_callpeak\_forced*). If the workflow input parameters were set from the very beginning to force peak calling with a fixed fragment size, this step is skipped and the original peak calling results are saved. In the next step the workflow again calculates the number of islands and the estimated fragment size (*macs2\_island\_count\_forced*) for the data obtained from the *macs2\_callpeak\_forced* step. If that step was skipped, the results from *macs2\_island\_count\_forced* are equal to the ones obtained from *macs2\_island\_count*. The next step (*macs2\_stat*) defines which islands and estimated fragment size should be used in the workflow output: either from the *macs2\_island\_count* step or from the *macs2\_island\_count\_forced* step. If the input trigger of this step is set to True, it means that *macs2\_callpeak\_forced* was run and returned results different from *macs2\_callpeak*, so *macs2\_stat* returns [fragments\_new, fragments\_old, islands\_new]; if the trigger is False, the step returns [fragments\_old, fragments\_old, islands\_old], where the suffix "old" marks results obtained from the *macs2\_island\_count* step and the suffix "new" those from *macs2\_island\_count\_forced*.

The following two steps (*bamtools\_stats* and *bam\_to\_bigwig*) calculate coverage from the input BAM file and save it in BigWig format (see the sketch below). For that purpose `bamtools stats` returns the number of mapped reads, which is then used as a scaling factor by `bedtools genomecov` when it performs the coverage calculation and saves it in bedGraph format. The bedGraph file is then sorted and converted to BigWig format by the `bedGraphToBigWig` tool from the UCSC utilities. To adapt the pipeline for ATAC-Seq data analysis we calculate genome coverage using only the first 9 bp of every read.

The *get\_stat* step returns a text file with statistics in the form of [TOTAL, ALIGNED, SUPRESSED, USED] read counts. The *island\_intersect* step assigns genes and regions to the islands obtained from *macs2\_callpeak\_forced*. The *average\_tag\_density* step calculates data for the average tag density plot based on the BAM file.
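A rough sketch of the coverage branch under the assumption that reads are clipped to their first 9 bp before counting; file names and the read-count extraction are illustrative, not the workflow's exact commands.

```bash
# Number of mapped reads, used to scale coverage to "per million mapped reads"
MAPPED=$(samtools view -c -F 4 sorted.bam)
SCALE=$(echo "1000000 / $MAPPED" | bc -l)

# Keep only the first 9 bp of every read, then compute scaled genome coverage
bedtools bamtobed -i sorted.bam \
  | awk 'BEGIN{OFS="\t"} {if ($6=="+") $3=$2+9; else {$2=$3-9; if ($2<0) $2=0}; print}' \
  | bedtools genomecov -i stdin -g chrom.sizes -bg -scale "$SCALE" \
  | sort -k1,1 -k2,2n > coverage.bedGraph

# Convert to BigWig for genome-browser display
bedGraphToBigWig coverage.bedGraph chrom.sizes coverage.bigWig
```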

https://github.com/datirium/workflows.git

Path: workflows/trim-atacseq-se.cwl

Branch/Commit ID: 4360fb2e778ecee42e5f78f83b78c65ab3a2b1df

workflow graph Generate genome indices for STAR & bowtie

Creates indices for:

* [STAR](https://github.com/alexdobin/STAR) v2.5.3a (03/17/2017) PMID: [23104886](https://www.ncbi.nlm.nih.gov/pubmed/23104886)
* [bowtie](http://bowtie-bio.sourceforge.net/tutorial.shtml) v1.2.0 (12/30/2016)

It performs the following steps (a command sketch follows the list):

1. `STAR --runMode genomeGenerate` to generate indices based on [FASTA](http://zhanglab.ccmb.med.umich.edu/FASTA/) and [GTF](http://mblab.wustl.edu/GTF2.html) input files; returns results as an array of files
2. Outputs indices as the [Directory](http://www.commonwl.org/v1.0/CommandLineTool.html#Directory) data type
3. Separates the *chrNameLength.txt* file from the Directory output
4. `bowtie-build` to generate indices; requires a genome [FASTA](http://zhanglab.ccmb.med.umich.edu/FASTA/) file as input, returns results as a group of main and secondary files
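A minimal sketch of the two index-building commands. Paths and the `--sjdbOverhang` value are placeholders chosen for illustration.

```bash
# STAR genome indices from a FASTA reference and a GTF annotation
STAR --runMode genomeGenerate \
     --genomeDir star_indices/ \
     --genomeFastaFiles genome.fa \
     --sjdbGTFfile annotation.gtf \
     --sjdbOverhang 100

# Bowtie (v1) indices from the same FASTA reference
bowtie-build genome.fa genome
```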

https://github.com/datirium/workflows.git

Path: workflows/genome-indices.cwl

Branch/Commit ID: 1a46cb0e8f973481fe5ae3ae6188a41622c8532e

workflow graph Motif Finding with HOMER with random background regions

Motif Finding with HOMER with random background regions
---------------------------------------------------

HOMER contains a novel motif discovery algorithm that was designed for regulatory element analysis in genomics applications (DNA only, no protein). It is a differential motif discovery algorithm, which means that it takes two sets of sequences and tries to identify the regulatory elements that are specifically enriched in one set relative to the other. It uses ZOOPS scoring (zero or one occurrence per sequence) coupled with hypergeometric enrichment calculations (or binomial) to determine motif enrichment. HOMER also tries its best to account for sequence bias in the dataset. It was designed with ChIP-Seq and promoter analysis in mind, but can be applied to pretty much any nucleic acids motif finding problem.

Here is how we generate the background for motif analysis
-------------------------------------

1. Take an input file with regions in the form of "chr" "start" "end"
2. Sort the regions and remove duplicates
3. Extend each region by 20 kb in both directions
4. Merge all overlapping extended regions
5. Subtract the non-extended regions from the extended ones
6. Randomly distribute the non-extended regions within the regions obtained in the previous step
7. Get a FASTA file from these randomly distributed regions and use it as the background (see the sketch after this section)

For more information please refer to:
-------------------------------------

[Official documentation](http://homer.ucsd.edu/homer/motif/)
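A command-line sketch of the background generation steps listed above. The tool choices (bedtools, HOMER's `findMotifs.pl`) and all file names are assumptions for illustration; the actual workflow may implement these steps with different utilities.

```bash
# 1-2. Sort the input regions and remove duplicates
sort -k1,1 -k2,2n regions.bed | uniq > regions.sorted.bed

# 3-4. Extend each region by 20 kb in both directions and merge overlaps
bedtools slop -i regions.sorted.bed -g chrom.sizes -b 20000 \
  | bedtools merge -i stdin > extended.merged.bed

# 5. Remove the original (non-extended) regions from the extended ones
bedtools subtract -a extended.merged.bed -b regions.sorted.bed > background.space.bed

# 6. Randomly place regions of the original sizes inside the remaining space
bedtools shuffle -i regions.sorted.bed -g chrom.sizes -incl background.space.bed \
  > background.regions.bed

# 7. Extract target and background sequences, then run HOMER with the custom background
bedtools getfasta -fi genome.fa -bed regions.sorted.bed -fo target.fa
bedtools getfasta -fi genome.fa -bed background.regions.bed -fo background.fa
findMotifs.pl target.fa fasta homer_results/ -fasta background.fa
```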

https://github.com/datirium/workflows.git

Path: workflows/homer-motif-analysis.cwl

Branch/Commit ID: 1a46cb0e8f973481fe5ae3ae6188a41622c8532e

workflow graph Gene expression merge - combines RPKM gene expression from several experiments

Gene expression merge - combines RPKM gene expression from several experiments
===================================================================================

The workflow merges RPKM gene expression from several experiments based on the values of the GeneId, Chrom, TxStart, TxEnd and Strand columns. The reported RPKM columns are renamed based on the experiment names (the merge logic is sketched below).
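The merge itself boils down to a multi-column join. A minimal sketch for two experiments follows; the column order, file names and suffixes are assumptions, and the actual workflow handles an arbitrary number of inputs.

```bash
# Hypothetical inputs expr_A.tsv and expr_B.tsv: a header line followed by
# tab-separated columns GeneId, Chrom, TxStart, TxEnd, Strand, Rpkm.
printf 'GeneId\tChrom\tTxStart\tTxEnd\tStrand\tRpkm_A\tRpkm_B\n' > merged.tsv
awk 'BEGIN {FS=OFS="\t"}
     FNR == 1 {next}                                          # skip header lines
     NR == FNR {rpkm[$1 FS $2 FS $3 FS $4 FS $5] = $6; next}  # first file: RPKM by key
     {
       key = $1 FS $2 FS $3 FS $4 FS $5
       if (key in rpkm) print $1, $2, $3, $4, $5, rpkm[key], $6  # present in both files
     }' expr_A.tsv expr_B.tsv >> merged.tsv
```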

https://github.com/datirium/workflows.git

Path: workflows/feature-merge.cwl

Branch/Commit ID: 564156a9e1cc7c3679a926c479ba3ae133b1bfd4

workflow graph GAT - Genomic Association Tester

GAT: Genomic Association Tester
==============================================

A common question in genomic analysis is whether two sets of genomic intervals overlap significantly. This question arises, for example, in the interpretation of ChIP-Seq or RNA-Seq data.

The Genomic Association Tester (GAT) is a tool for computing the significance of overlap between multiple sets of genomic intervals. GAT estimates significance based on simulation: it implements a sampling algorithm. Given a chromosome (workspace) and segments of interest, for example from a ChIP-Seq experiment, GAT creates randomized versions of the segments of interest falling into the workspace. These sampled segments are then compared to existing genomic annotations.

The sampling method is conceptually simple. Randomized samples of the segments of interest are created in a two-step procedure. Firstly, a segment size is selected from the same size distribution as the original segments of interest. Secondly, a random position is assigned to the segment. The sampling stops when exactly the same number of nucleotides has been sampled. To improve the speed of sampling, segment overlap is not resolved until the very end of the sampling procedure. Conflicts are then resolved by randomly removing and re-sampling segments until a covering set has been achieved. Because the size of randomized segments is derived from the observed segment size distribution of the segments of interest, the actual segment sizes in the sampled segments are usually not exactly identical to the ones in the segments of interest. This is in contrast to a sampling method that permutes segment positions within the workspace.
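A typical GAT invocation looks roughly like the following; the file names and the sample count are placeholders, not the workflow's exact inputs.

```bash
# Test whether ChIP-Seq peaks (segments) overlap annotations more often than
# expected by chance within the workspace, using 1000 randomized samples
gat-run.py \
  --segments=peaks.bed \
  --annotations=annotations.bed \
  --workspace=contigs_ungapped.bed \
  --num-samples=1000 \
  --log=gat.log \
  > gat_results.tsv
```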

https://github.com/datirium/workflows.git

Path: workflows/gat-run.cwl

Branch/Commit ID: 564156a9e1cc7c3679a926c479ba3ae133b1bfd4