Discussion of Megahit

Megahit was easy to install and ran very quickly, even on large datasets.

We thought it seemed like a fine approach for a low-complexity dataset. For my data, though, Megahit assembled only 12% of the reads from one of my samples, and only 3% of the coassembly, using the default settings. For a high-complexity dataset, a better strategy might be to normalize k-mers first, using for example diginorm or Stacks, before running Megahit's meta-large preset, or even to switch to an assembler with more options.
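To make that concrete, here is a minimal sketch of the normalize-then-assemble idea, assuming khmer's normalize-by-median.py (the diginorm implementation) and MEGAHIT are both on the PATH. The filenames, k-mer size, and coverage cutoff are placeholders to tune for your own data, and the exact khmer flags can differ between versions.

#!/usr/bin/env python3
"""Sketch: digital normalization (diginorm) followed by MEGAHIT meta-large."""
import subprocess

# Placeholder input: interleaved paired-end reads for one sample or a coassembly.
reads = "sample_interleaved.fastq.gz"
normalized = "sample_normalized.fastq.gz"

# Step 1: digital normalization with khmer's normalize-by-median.py,
# discarding reads whose median k-mer coverage already exceeds the cutoff.
subprocess.run([
    "normalize-by-median.py",
    "-k", "20",        # k-mer size used to estimate coverage
    "-C", "20",        # median-coverage cutoff
    "-p",              # input is interleaved paired-end
    "-o", normalized,  # retained reads go here
    reads,
], check=True)

# Step 2: assemble the normalized reads with MEGAHIT's preset for
# large and complex metagenomes.
subprocess.run([
    "megahit",
    "--12", normalized,         # interleaved paired-end input
    "--presets", "meta-large",  # preset for large/complex metagenomes
    "-o", "megahit_out",        # output directory (must not already exist)
], check=True)

Normalization discards reads from regions that are already deeply covered, which can shrink memory use and runtime considerably on complex metagenomes; whether it helps the final assembly is something to check for each dataset.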

We also discussed other assemblers, and concluded that it is probably best to choose an assembler based on the dataset in question.
