Battles in Slovakia
micro-pipetting Vitriol
Several people have asked me to review this Achs et al preprint from Slovakia.
Overview- The paper would have more impact if it left the divisive language aside. It was clearly well funded, judging by all the Illumina sequencing and qPCR assay development. The pictorials for their methods are very informative, but key details are missing. Despite the attempts at a comprehensive rebuttal, no disclosure is made of the source of this funding. The authors are from a Virology Institute, which likely explains their unsubstantiated vaccine-safety bias. You can see their annual report here. It discloses work with mRNA vaccine manufacturers like Sensible Biotechnologies Inc. This was not disclosed in their conflict-of-interest statement.
Nevertheless, it is worth reviewing the methods in careful detail, as we can still extract information provided you can read past the “anti-vax” derision and “mis-information” pejoratives in the text.
Review-
It is odd that the paper opens with such an overt “Saved millions” lie, seen below. It invites circular reasoning.
“Since vaccines are safe and effective anyone questioning the safety is driving vaccine hesitancy”.
This is the very attitude that is causing a justified house cleaning at the CDC.
They are likely being very lazy with this statement, referencing the Watson et al paper, a model known to be a statistical farce. This was well addressed by Dr. Raphael Lataster.
If you are here to quantitate DNA contamination in the vaccines, why is this your opening salvo? It ruins any sense of objectivity with the reader.
This invites the reader to look for bias in the work and it is not hard to find.
Knowing they have a predetermined goal of showing the vaccines pose no risk, let’s look at the Parlor Tricks they play to craft this narrative.
Parlor Trick #1)
They start by feeding their qPCR reactions with just a Triton-X treatment. There is no 95°C heat step, despite Speicher et al showing how much more DNA is liberated with Triton-X + heat. The Speicher et al RNase A work was not in the preprint, so they can likely claim it wasn’t known at the time of their submission. Speicher et al is now peer reviewed. The RNase A with heat has also been published on social media and Substack, and the Achs et al authors are on record chastising this type of “mis-information.” I assume they are reading what they criticize?
Below in Achs et al you can see there is no heat. They really need a Triton-X dilution curve or the addition of heat to prove that they are liberating all of the mRNA and DNA from the LNPs. They are going simply on faith that this is adequate when there is public data showing it can change the result 5-10 fold. A 5-fold change would put their vials over the obsolete limit. I say obsolete because the 10ng limit is a straw man that applied to naked DNA injections. LNPs protect this mRNA and DNA from decay. Without the LNPs, the mRNA would not work. It would be destroyed. They know this but will never speak about how the 10ng/dose limit was never adjusted for it.
The Yellow vs the Red bars show what happens with heat (Speicher et al). Note this is a log scale, and for the samples on the right the values changed by nearly a full log. That is not something you can ignore.
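To put numbers on that gap: qPCR Ct shifts convert to fold changes as 2^ΔCt, so a ~3.3-cycle shift is roughly a 10-fold difference in liberated template. A minimal sketch with assumed Ct values (illustrative only, not taken from either paper):

```shell
# Assumed Ct values for illustration only: a ~3.3-cycle gap between
# Triton-X-only and Triton-X + 95C heat corresponds to ~10x more template,
# since fold change = 2^(delta Ct).
ct_triton_only=22.3   # assumed Ct, detergent lysis alone
ct_triton_heat=19.0   # assumed Ct, detergent + heat
fold=$(awk -v a="$ct_triton_only" -v b="$ct_triton_heat" \
  'BEGIN{printf "%.1f", 2^(a-b)}')
echo "heat liberated ~${fold}x more amplifiable template"
```

On a log-scale bar chart, that ~10x difference is the “nearly a log” jump between the bars.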
Parlor Trick #2)
This one has been described before, as Kaiser et al tried to pull the same trick. It can be easily summed up in a meme. It was spelled out in Kammerer et al and on the above Substack, so there is no reason for it to be left unaddressed in a paper spitting vitriol over this prior work.
As anyone following our work knows by now, when you perform DNA extractions with Phenol:Chloroform and EtOH precipitation, you lose a lot of DNA, and you tend to lose exponentially more of it the shorter the fragments are.
Note they spiked in DNA of known concentration to monitor this loss (so they are aware of it), but they did NOT disclose the size of this spike-in DNA. They should spike in a DNA ladder so you can quantitate the loss of the 10bp, 20bp, 30bp, 40bp, and 50bp fragments. Spiking in an 8kb fragment will show limited loss. Spiking in a 10bp ladder will give you answers like this.
The Red line is the DNA ladder diluted 10-fold. It is offset (delayed) in the electrophoresis, as the internal size standard can be picked up in the Red data. The other conditions have more signal than the lane marker and as a result have not been perfectly mobility-corrected to line up with the Red bands. The Green, Blue and Orange lines are on top of each other.
The Green line shows the loss of DNA compared to the Orange and Blue lines.
Their failure to use a 10bp ladder for this is a commonly repeated Parlor Trick (see Kaiser et al). Everyone knows this attribute of DNA preps. It’s heavily advertised in kits that perform this type of work.
For example, here is Beckman’s AMPure kit. It documents how much loss there is for the smaller bands at different AMPure concentrations. This can be a tunable feature to prep DNA and eliminate short PCR primers or other shrapnel nucleic acids you don’t want to persist on your detectors. I’m less forgiving of these types of sophisms, as I engineered these types of tools for the Human Genome Project and this is no longer bleeding-edge insider knowledge. It’s ubiquitously known that DNA prep yields are very dependent on DNA size and condition.
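What a ladder spike-in buys you is a recovery number per fragment size rather than one blended yield. A toy tally with assumed per-band signals (illustrative numbers, not the paper’s data):

```shell
# Assumed recovered signal per ladder band, with input normalized to 100 for
# each size. Small fragments wash out of EtOH/column preps far faster than
# long ones, which a single long spike-in cannot reveal.
report=$(awk 'BEGIN{
  n = split("10:2 20:8 30:20 50:55 100:85 8000:97", rows, " ")
  for (i = 1; i <= n; i++) {
    split(rows[i], f, ":")            # f[1] = size in bp, f[2] = recovered %
    printf "%sbp:%s%% ", f[1], f[2]
  }
}')
echo "recovery by size -> $report"
```

A lone 8kb spike would report ~97% recovery and completely hide the near-total loss of the 10-50bp material.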
They try to assuage readers by using a second DNA prep that suffers from the same problem. This is a kit designed to capture cfDNA, which is 170bp+.
Without knowledge of the size of their spike-in controls, this is a meaningless yield.
It appears from the manufacturer’s data on this kit that specifications are missing for how it captures material under 100bp, or whether it captures anything at all in that size range.
Whenever Achs et al do use heat to liberate the DNA, they follow up with an EtOH precip, which loses most of the small DNA.
Why on earth are they using 60ul of RNase A? Most kits recommend 1ul of this material, as it’s very concentrated (100 ug/ul of protein). Likewise, the high amount of Proteinase K is overkill.
If you want to destroy the yield of a DNA prep kit designed to capture nanograms of DNA, flood it with micrograms-to-milligrams of protein.
In fact, they claim to be using Qiagen QIAamp spin columns. The manufacturer’s protocol uses only 4ul of RNase A, so what is the justification for swamping this step with so much protein? 60ul at 100mg/ml (100ug/ul) = 6000ug, or 6 milligrams of RNase A.
They repeat this with Moderna using even more enzyme?
There are no spike-in controls for this process, so it’s very likely their excessive use of micrograms-to-milligrams of enzyme is clogging up their DNA preps.
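The enzyme-mass arithmetic, spelled out (the 60ul and 100mg/ml figures come from their methods; the 4ul comparison is the kit protocol’s recommendation):

```shell
# 60 ul of RNase A at 100 mg/ml (= 100 ug/ul) vs the ~4 ul a QIAamp-style
# protocol calls for. Integer math is exact here.
vol_ul=60
conc_ug_per_ul=100                      # 100 mg/ml == 100 ug/ul
mass_ug=$((vol_ul * conc_ug_per_ul))    # total protein they add
mass_mg=$((mass_ug / 1000))
kit_ug=$((4 * conc_ug_per_ul))          # what the kit protocol would add
echo "they add ${mass_ug} ug (${mass_mg} mg); the kit protocol adds ${kit_ug} ug"
```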
They do attempt to measure the impact of the LNPs on qPCR inhibition but this is Parlor Trick #3)
You cannot measure the impact of inhibitors when you spike them into CT 8-9 samples. The vaccines are coming out 10 CTs later (CT 19), or 1000-fold lower in quantity.
The right thing to do is to run a serial dilution of the vaccine and see if the 1X samples are exactly 3.3 CTs ahead of the 1:10 dilutions. Speicher et al found this NOT to be the case, so that entire study was conducted at 1:10 dilutions to avoid LNP inhibition of the qPCR. This may differ based on the qPCR reagents used, so it can’t simply be copied from Speicher et al and needs to be performed under their exact qPCR conditions.
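The dilution check is simple arithmetic: a clean 1:10 step should move Ct by log2(10) ≈ 3.32 cycles, and anything less at the concentrated end flags inhibition. A sketch with assumed Ct values:

```shell
# Expected Ct spacing for a clean 1:10 dilution is log2(10) ~= 3.32 cycles.
# The Ct values below are assumed, showing an inhibited 1X reaction.
expected=$(awk 'BEGIN{printf "%.2f", log(10)/log(2)}')
ct_1x=19.0       # assumed Ct of undiluted vaccine
ct_1to10=21.1    # assumed Ct of the 1:10 dilution
observed=$(awk -v a="$ct_1x" -v b="$ct_1to10" 'BEGIN{printf "%.2f", b-a}')
echo "expected dCt=${expected}, observed dCt=${observed} -> 1X looks inhibited"
```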
Likewise, the study makes use of 200bp+ amplicons, which will under-measure the fragmented DNA. You can see this in their quant variance in the figure below. They admit to this in their Discussion, as their 63bp KAN assay gives them the highest signal. The high variance seen in this figure is not something to celebrate. It reinforces what others have published, including Moderna. qPCR will never get this answer right, as it’s too dependent on amplicon size and which part of the plasmid you target.
Similar ambiguities about the DNA preps exist in their Illumina Library Prep.
They amplify the library for 10 cycles (in some cases 13 cycles or more). This will enrich for smaller fragments over larger ones. They also use a final SPB clean-up on the library. In the prior clean-ups with Illumina SPB (solid phase beads) they mention 4X SPB, which should do a good job capturing the small material, but they do not disclose the SPB concentration for this second purification step. This would be a great way to eliminate certain sizes of DNA. You can see no DNA below 200bp in their Illumina libraries. We don’t expect much below 120bp, as the ILMN adaptors alone can be this large, but the size cut-off on this prep is likely eliminating the smaller parts of the library. It’s not clear to me what their adaptor sizes are. Given the emphasis on small DNA measurements, this should be disclosed so these charts can be put in proper context. You will note many of the Agilent traces have post-amplification fragments out to 1000 bases.
Their Ori 1B primers also do not match the Moderna reference sequence.
Their Kan 1C primers and probe do not match the Moderna reference sequence.
Primers highlighted in yellow below do not target Moderna.
The supplemental data for this paper is here.
You can see they never report any values for the Moderna SpikeVax for Kan.
Their discussion ironically speaks to all of the prior work on this topic NOT being peer reviewed. Konig, Kammerer, Wang and now Speicher are peer reviewed.
But peer review isn’t as important as replication, and many more independent labs have found DNA contamination problems and voiced concerns under severe penalty.
It’s possible the vials used in this study (some from 2025) are in fact low, and even if they address the multiple Parlor Tricks mentioned in this thread, they may still remain low. But more vials need surveying, given the very high adverse-event rates seen with different lots (Schmeling et al).
As the paper moves into the Discussion section, the authors decide to flex their ideology. Once again the vaccine hesitancy canard is thrown out. A sign that these authors are not honest brokers.
“You cannot question the liability-free, mandated vaccines or you will spread vaccine hesitancy.” RFKJr is currently dealing with this madness at the CDC, and thankfully Retsef Levi isn’t having any of this circular thinking.
The authors extend this bravado with bold claims (“completely disproving”) they do not achieve. I have been on record critiquing the Flemming paper for its SV40 omission. Each of the papers to date has used slightly different methods, and they mostly converge on qPCR being below the limit in most but not all vials, while the Qubit is over the limit. All papers raise concern over these limits being inappropriate for LNP-based transfection, a topic the Achs et al authors never address.
The work performed in this paper doesn’t move the needle, as they left too many questions about how much DNA they are losing in the purification procedures, and they blew a massive amount of money reconfirming their own “saved millions of lives” biases.
The chest pounding continues.
They did not prove Triton X-100 releases all DNA. Speicher et al have shown you need heat and Triton-X, and we don’t even claim it’s 100%. We just get 5-10X more signal when you boil with soap than with soap or boiling alone. They did not quantitate their yield loss with a DNA ladder. This statement is patently false.
The TGA is an irrelevant party in this discussion. This is an appeal to an authority that is on the Pharma payroll (the TGA receives a large portion of its budget from Pharma) and has no public methods to scrutinize. This section should be removed, as nothing the TGA has provided to date can be verified and most of their documents are redacted.
Here they chest pound some more over an assay they failed to design correctly:)
Maybe check the Moderna sequence before you assume it has the same Kan gene as Pfizer. Yes, their KAN1C assay doesn’t even match the Moderna reference sequence, and they conveniently leave this data out of the paper.
Now they are at least being honest. Yes, the amplicon size will change the quant. Precisely why we reject the TGA’s authority on this, as they have never revealed the length of their amplicon or the primer sequences they use.
But their Illumina data is not an assessment of DNA fragment size, as that platform selectively amplifies smaller fragments and fails to amplify large ones. They need ONT for this, and even with ONT they need to be careful not to omit the small fragments in the standard ligation-based sequencing library methods. These methods use a 0.7X-1X AMPure step, which will select for longer fragments. So this median DNA length on Illumina is a mirage, easily manipulated by the DNA binding parameters used to process the library and by the failure of Illumina to amplify the 3.5kb fragments that have been found with ONT on these vials.
Parlor Trick #4)
The regulatory concern is for DNA over 200bp. We don’t agree with this limit, as it is based on naked DNA, not LNP-protected DNA. Their Figure 11 puts the line at 300bp to under-emphasize the number of molecules over the limit. I will address this more quantitatively below.
This next statement on page 14 is false, as Kammerer and Speicher et al address it.
Konig also addresses this in a different manner.
But they seem triggered.
Here they chest-pound about a new method they developed that is so ‘validated’ it doesn’t agree with their other findings :) ILMN found DNA. Qubit found DNA. qPCR found DNA. And your electrophoresis method found nothing! This is NOT consistent with your other findings. Go back to the drawing board and figure out why.
Another chest-pound that just isn’t true. While it was a good idea not to use fragmentation for their Illumina libraries, the methods used DO NOT retain the original molecule sizes in any quantitative manner, particularly when 13 cycles of library PCR are layered on top of several bead purification steps that size-select. They need ONT for this. When we performed ONT we saw similar mean sizes (214bp average) but with a very long tail that Illumina can’t measure. We have 3.5kb reads from ONT. These can’t be amplified or sequenced on Illumina. See the appendix below for a ChatGPT5.o dissection of this issue.
And here comes the name-calling: “anti-vaccine” groups. That’s real special coming from a lab that is funded to study vaccines.
This is where it would have been good for the ‘vaxophiles’ to be honest about these regulations being derived for vaccines that do not involve transfection reagents. We need more than your suggestions and hunches when you mandate these with zero liability. Prove it. LNPs change all of this.
Another paper curiously omitted from Achs et al is Georgiou et al. When you use DNase I to chop up DNA, it has a non-linear impact on PicoGreen or Qubit readings.
This underestimates the DNA quant by 70%. Wonder why these authors don’t care to speak about this effect?
They present some honesty here, but again their spike-in controls are inadequate to measure this unless they use a 10bp ladder.
They end the paper with more insults. They are right and everyone else is an anti-vaxxer misinformation specialist.
As an aside, their Illumina read-mapping data nicely displays why you can’t trust the TGA using a single non-disclosed KAN assay to assess the DNA contamination. The DNase I is not digesting these plasmids uniformly, so a single assay will never give you a complete quant. Moderna made this very clear in their own patent. This is discussed in Speicher et al.
Conclusions
It’s frustrating to see such exuberant confidence when it’s clear these researchers have never engineered DNA purification systems or next-generation sequencers. They make very overt gaffes, presented with beaming overconfidence in what their data means. I’m being charitable in assuming these are gaffes and not engineered to mislead.
Nevertheless, the paper will likely be scooped up by one of the high-impact vaccine-parroting pharma journals, and the review process is unlikely to find any of these faults, as the paper recites the preferred safe-and-effective psalms from start to finish.
This is not a shoe-string study. It was a lot of work and well funded. Who funded it is yet to be disclosed, and the authors were not honest about their institution’s current conflicts of interest working in the mRNA vaccine space. I would not advance this to publication, as the text is dripping with vitriol and bias and the methods have at least 4 major Parlor Tricks designed to mislead the reader.
Appendix
Sequencing analysis
I only downloaded one of the Illumina sequencing samples from NCBI.
This run contained ~64M paired reads, which is massive overkill for sequencing an 8kb plasmid. Again, every sample was sequenced to wasteful proportions, suggesting budget is not their top priority.
I trimmed SRR34932925 using cutadapt
cutadapt -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCA -A AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT -o SRR34932925_1.trim.fastq.gz -p SRR34932925_2.trim.fastq.gz SRR34932925_1.fastq.gz SRR34932925_2.fastq.gz
I downsampled the data using reformat.sh
reformat.sh in1=SRR34932925_1.trim.fastq.gz in2=SRR34932925_2.trim.fastq.gz out1=SRR34932925_1.trim.sub.fastq.gz out2=SRR34932925_2.trim.sub.fastq.gz samplerate=0.01
Mapped these reads with BWA mem to the Pfizer bivalent reference.
bwa mem -t 8 Pbiv1_WM_k141_107.fa SRR34932925_1.trim.sub.fastq.gz SRR34932925_2.trim.sub.fastq.gz| samtools sort -o Pbiv_slovakia.sub.sorted.bam
Samtools was used to index the BAM file and IGV used to view the reads.
samtools index Pbiv_slovakia.sub.sorted.bam
The variants (vertical colored bars) are expected in the spike region, as we mapped their monovalent sample to a Pfizer reference that was assembled from a bivalent vaccine. You can see higher sequencing coverage over the plasmid backbone on the left than over the spike region on the right. This can result from either differential DNase I activity in the Pfizer manufacturing process or residual modRNA interfering with the T4 DNA ligase used in the Illumina adaptor ligation.
The mapped insert size of these reads is charted below.
samtools stats -f 0x2 -F 0x904 Pbiv_slovakia.sub.sorted.bam > stats.txt
grep ^IS stats.txt | cut -f2- > is.txt
gnuplot -e "set term png size 1000,600; \
set output 'insert_hist.png'; \
set xlabel 'Insert size (bp)'; \
set ylabel 'Pair count'; \
plot 'is.txt' using 1:2 with lines title 'Insert size'"
Note, Illumina sequencers have an insert-size bias. They preferentially amplify smaller molecules and fail to amplify larger ones.
ChatGPT5.o agrees…
Exactly — you’re seeing the key flaw. The Illumina insert-size distribution in a sequencing library ≠ the original size distribution of DNA in the sample.
Here’s why:
🔹 Where insert-size distortion comes from
DNase I fragmentation
Produces a wide distribution of breakpoints, but long molecules survive in low numbers.
Bias toward smaller, more readily ligatable pieces.
SPRI bead cleanups
Ratio-dependent: systematically deplete fragments outside the target window (especially long ones).
Multiple bead steps compress the distribution toward smaller inserts.
PCR (10–13 cycles)
Shorter templates amplify more efficiently.
Long templates drop out or plateau earlier.
Duplication inflates the apparent representation of short inserts.
Cluster generation on Illumina (NextSeq 2000, ExAmp chemistry)
Patterned flow cells seed and amplify short inserts far more efficiently.
Long templates (500–800+ bp) form fewer clusters and are under-represented in the final data.
Instrument run configuration
Using PE150/PE100, you never observe “full-length” 600–800 bp molecules anyway — just ends and inferred TLEN.
Alignment heuristics + overlapping reads can mis-estimate insert size.
🔹 What insert size actually tells you
The plot you showed (peak ~120–170 bp, tail out to ~400) is typical for library prep + ExAmp physics, not a faithful copy of the input DNA population.
It tells you what clusters successfully amplified and sequenced, not what was originally present.
Long molecules can exist in the sample yet never appear in the data.
🔹 So are their conclusions supported?
If the authors claim “Illumina insert sizes prove that all contaminating DNA in the vaccine was ~150 bp” → that’s not supported.
Too many biasing steps select for or inflate small fragments.
Long molecules could be present but invisible to this workflow.
The most you can say is “the clusterable, sequenced library fragments had a modal insert size of ~150 bp under this prep and platform”.
✅ Bottom line
Illumina insert-size histograms are not conclusive evidence of the underlying contaminant DNA size in the Pfizer vaccine prep.
They reflect library construction and platform biases.
To resolve original size distributions, you’d need an orthogonal method: e.g. Bioanalyzer/Fragment Analyzer of input DNA, qPCR amplicon spanning, or long-read sequencing (ONT/PacBio) without heavy size bias.
And when the bombastic language used in this preprint is fed into ChatGPT5.o it also disagrees.
Yes — that paragraph is also overstated, because the Illumina short-read workflow doesn’t truly preserve the “native” size distribution even when you omit enzymatic fragmentation.
Here’s a breakdown with disclaimers added:
Original claim
“...omitting the enzymatic fragmentation step during NGS library preparation and using paired-end sequencing. This allowed us to preserve and analyse the native fragment size distribution.”
⚠️ Disclaimer:
Omitting enzymatic fragmentation does prevent artificial shortening, but multiple other steps (end-repair, adapter ligation, bead-based size selection, PCR amplification, and the cluster chemistry on patterned flow cells) still bias against long molecules. Therefore, the measured insert size distribution reflects the subset of fragments that efficiently survive library prep and cluster on Illumina, not necessarily the true in situ distribution of residual DNA.
Original claim
“We found that the residual DNA was highly fragmented, with a median fragment length of approximately 150 base pairs.”
⚠️ Disclaimer:
This accurately describes the library inserts that were sequenced, but does not prove all residual DNA is degraded to this size. Longer molecules may exist but go under-detected due to preferential loss and under-clustering.
Original claim
“This degree of degradation is consistent with prior regulatory assumptions that residual DNA in biologics is typically non-functional and not competent for replication or integration.”
⚠️ Disclaimer:
The ~150 bp median insert size does not, by itself, establish biological inertness. Regulatory guidelines cite risk reduction based on low abundance and probability of functional sequences, but fragment size alone cannot exclude replication-competent or integration-capable molecules if longer fragments are present but not efficiently sequenced.
Original claim
“The observed fragment length falls below the threshold associated with potential biological activity, providing additional reassurance regarding its biological inertness...”
⚠️ Disclaimer:
There is no absolute fragment-length threshold for biological activity:
Some regulatory discussions use 200 bp as a cutoff for “low concern,” but this is not a hard biological boundary.
Plasmid backbones with strong promoters/origins could remain functional if present as longer molecules, even at very low frequency.
Only orthogonal methods (e.g., Bioanalyzer sizing, long-read sequencing, qPCR spanning long intervals, transformation assays) can demonstrate whether long, intact sequences are absent.
✅ Balanced Restatement
A fragmentation-free Illumina workflow produced libraries with median insert sizes of ~150 bp. This indicates that the clusterable DNA fraction detectable by this method is enriched for short fragments. However, library-prep and sequencing biases strongly disfavor long molecules, so these results do not rule out the presence of longer plasmid-derived DNA. Consequently, conclusions about the absolute absence of biologically active or replication-competent DNA fragments require orthogonal validation methods.
Would you like me to draft a peer-review style critique of this section, line by line, that you could use as formal reviewer feedback (with citations to FDA/WHO guidance on residual DNA in biologics)?
You will notice they try to downplay the number of reads over 300bp in Figure 11. The more important regulatory question is the number of reads over 200bp.
When I map downsampled sample FP9632, I get closer to 30% of the reads being over 200bp, not the 7.20% they report over 300bp. This is another Parlor Trick.
samtools view -@8 -f 0x2 -F 0x914 Pbiv_slovakia.sub_0.1.sorted.bam | awk '{t=$9; if(t<0)t=-t; if(t>0){total++; if(t>200) gt200++; if(t>300) gt300++}} \
END{printf(">%dbp: %.2f%%\n>%dbp: %.2f%%\n",200,100*gt200/total,300,100*gt300/total)}'
>200bp: 29.65%
>300bp: 8.03%
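The same TLEN tally can be sanity-checked on synthetic values without a BAM in hand (the insert sizes below are made up for illustration):

```shell
# Feed assumed TLEN values (BAM column 9; negative for the downstream mate)
# through the same counting logic used above.
out=$(printf '%s\n' 150 -180 220 250 -320 90 410 205 | awk '
  {t = $1; if (t < 0) t = -t
   if (t > 0) {total++; if (t > 200) gt200++; if (t > 300) gt300++}}
  END{printf ">200bp: %.2f%%\n>300bp: %.2f%%\n", 100*gt200/total, 100*gt300/total}')
echo "$out"
```

Five of the eight assumed inserts exceed 200bp and two exceed 300bp, so the tally reports 62.50% and 25.00% respectively, confirming the counting logic before trusting it on the real BAM.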
Please feel free to comment if you find other Parlor Tricks in the paper. I got a bit bored after I found a few and noticed their condescending language.