
Association tests

Endpoint

We included 2,861 endpoints in the analysis. Endpoints with fewer than 80 cases among the 260,405 samples were excluded, as were endpoints labeled with an OMIT tag in the endpoint definition file.
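
As an illustration, a minimal sketch of this inclusion rule in Python, assuming a hypothetical endpoint definition table; the file name and column names ("n_cases", "tags") are placeholders, not the actual FinnGen file layout:

```python
# Hedged sketch of the endpoint inclusion rule; names are illustrative.
import pandas as pd

endpoints = pd.read_csv("endpoint_definitions.tsv", sep="\t")  # hypothetical path

included = endpoints[
    (endpoints["n_cases"] >= 80)                           # at least 80 cases
    & ~endpoints["tags"].fillna("").str.contains("OMIT")   # drop OMIT-tagged endpoints
]
print(f"{len(included)} endpoints included in the analysis")
```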

Null models

For the null model computation for each endpoint, we used age, sex, 10 PCs, and genotyping batch as covariates. A genotyping batch was included as a covariate for an endpoint only if that batch contained at least 10 cases and 10 controls, to avoid convergence issues. One genotyping batch must be left out of the covariates so that the batch indicators are not saturated; we excluded Thermo Fisher batch 16, as it was not enriched for any particular endpoint.
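
A minimal sketch of this per-endpoint batch covariate rule, assuming a hypothetical sample table; the column names ("batch", "status") and the reference batch label are assumptions, not the pipeline's actual names:

```python
# Hedged sketch: select genotyping-batch covariates for one endpoint.
import pandas as pd

def batch_covariates(samples, reference_batch):
    counts = samples.groupby("batch")["status"].agg(cases="sum", total="count")
    counts["controls"] = counts["total"] - counts["cases"]
    kept = counts[(counts["cases"] >= 10) & (counts["controls"] >= 10)].index
    # one batch is dropped as the reference level so the indicators are not saturated
    return [b for b in kept if b != reference_batch]
```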

For calculating the genetic relationship matrix (GRM), only variants imputed with an INFO score > 0.95 in all batches were used. Variants with > 3% missing genotypes were excluded, as were variants with MAF < 1%. The remaining variants were LD-pruned with a 1 Mb window and an r² threshold of 0.1. This resulted in a set of 59,037 well-imputed, non-rare variants for GRM calculation.

Options used for the null model computation (an example invocation sketch follows the list):

  • LOCO = false

  • numMarkers = 30

  • traceCVcutoff = 0.0025
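
As an illustration only, one way the null model step might be launched with SAIGE's step1_fitNULLGLMM.R wrapper; the file paths, phenotype/covariate column names, and batch covariate list are hypothetical, and the flag set should be checked against the SAIGE 0.39.1 documentation:

```python
# Hedged sketch of a SAIGE step 1 (null model) invocation; placeholders throughout.
import subprocess

batch_covs = ["BATCH_b01", "BATCH_b02"]  # hypothetical per-endpoint batch indicators
covariates = ["SEX", "AGE"] + [f"PC{i}" for i in range(1, 11)] + batch_covs

subprocess.run([
    "Rscript", "step1_fitNULLGLMM.R",
    "--plinkFile=grm_pruned",            # the 59,037 well-imputed, non-rare GRM variants
    "--phenoFile=pheno.tsv",             # hypothetical phenotype file
    "--phenoCol=ENDPOINT",               # one of the 2,861 endpoints
    "--covarColList=" + ",".join(covariates),
    "--sampleIDColinphenoFile=IID",
    "--traitType=binary",
    "--LOCO=FALSE",
    "--numMarkers=30",
    "--traceCVcutoff=0.0025",
    "--outputPrefix=ENDPOINT_null",
], check=True)
```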

Association tests

We ran association tests against each of the 2,861 endpoints for each variant with a minimum allele count of 5 from the imputation pipeline (SAIGE option minMAC = 5). We filtered the results to include only variants with an imputation INFO > 0.6.

Additional SAIGE option used: ratioCVcutoff = 0.001
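
A minimal sketch of the INFO > 0.6 filter described above, assuming hypothetical file names and a shared variant-ID/INFO column layout between the SAIGE output and the per-variant annotation:

```python
# Hedged sketch: keep only variants with imputation INFO > 0.6.
# File names and the "SNPID"/"INFO" columns are assumptions about the outputs.
import pandas as pd

results = pd.read_csv("ENDPOINT.SAIGE.txt.gz", sep="\t")   # hypothetical SAIGE output
info = pd.read_csv("variant_info.tsv.gz", sep="\t")        # hypothetical INFO annotation

filtered = results.merge(info[["SNPID", "INFO"]], on="SNPID")
filtered = filtered[filtered["INFO"] > 0.6]
filtered.to_csv("ENDPOINT.SAIGE.filtered.txt.gz", sep="\t", index=False)
```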


GWAS

We used the SAIGE software for running the R6 GWAS, as we did in previous releases. SAIGE is a mixed-model logistic regression R/C++ package. We used version 0.39.1 of the code: https://github.com/weizhouUMICH/SAIGE/tree/finngen_r6_jk. We made two modifications to the SAIGE 0.39.1 codebase (neither modification affects the method):

  • Null model .rda objects are trimmed to reduce RAM consumption

  • Ref hom, het, and alt hom counts in cases and controls are included in the output by summing the probabilities of each genotype over individuals; in the 0.39.1 implementation of SAIGE the counts are instead sums of the most probable genotypes over individuals (see the sketch after this list)
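
A small sketch contrasting the two counting schemes for a single variant, using toy genotype probabilities:

```python
# `probs` holds imputed genotype probabilities per individual for
# (ref hom, het, alt hom); the values are toy numbers for illustration.
import numpy as np

probs = np.array([
    [0.90, 0.09, 0.01],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

# Modified behaviour: sum the probabilities of each genotype over individuals.
soft_counts = probs.sum(axis=0)                               # [1.05, 1.04, 0.91]

# Original 0.39.1 behaviour: count the most probable genotype per individual.
hard_counts = np.bincount(probs.argmax(axis=1), minlength=3)  # [1, 1, 1]
```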

We analyzed:

  • 2,861 endpoints

  • 260,405 samples

  • 16,962,023 variants

We included the following covariates in the model: sex, age, 10 PCs, genotyping batch.

Sample QC and PCA

This is a description of the quality control procedures applied before running the GWAS.

PCA

The PCA for population structure was run as follows:

Variant filtering and LD pruning

The imputation panel is pruned iteratively, until a target number of SNPs is reached:

Starting set: 8,580,565 variants, keeping only variants with a minimum INFO score of 0.9 in all batches.

The script starts with PLINK parameters [500.0, 50.0, 0.9] (window, step, r²). It then iteratively decreases r² by 0.05, pruning the imputation panel each time, until the count falls below the threshold of 200,000 SNPs. Once the SNP count falls under 200,000, the pruning closest to the target is returned.

If the pruning at the higher r² is closer to the target, 200,000 SNPs are randomly selected from it; otherwise the last pruned SNP set is returned.

PLINK flags used: --snps-only --chr 1-22 --max-alleles 2 --maf 0.01

For this run the final LD parameters are --indep-pairwise 500.0 50.0 0.2, and 200,000 SNPs are returned.
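
A hedged sketch of the iterative pruning loop, with placeholder input and output names; the PLINK flags mirror those listed above:

```python
# Hedged sketch of the iterative LD-pruning loop; "panel_info09" and the output
# prefixes are placeholders, not the actual pipeline file names.
import subprocess

def n_pruned(prefix):
    with open(prefix + ".prune.in") as fh:
        return sum(1 for _ in fh)

window, step, r2, target = 500.0, 50.0, 0.9, 200_000
prev_n = None
while True:
    prefix = f"prune_r{r2:.2f}"
    subprocess.run([
        "plink", "--bfile", "panel_info09",
        "--snps-only", "--chr", "1-22", "--max-alleles", "2", "--maf", "0.01",
        "--indep-pairwise", str(window), str(step), str(r2),
        "--out", prefix,
    ], check=True)
    n = n_pruned(prefix)
    if n < target:
        break
    prev_n, r2 = n, round(r2 - 0.05, 2)

# Once under 200,000: if the previous (larger) set is closer to the target,
# randomly subsample 200,000 SNPs from it; otherwise keep the last pruned set.
use_previous = prev_n is not None and (prev_n - target) < (target - n)
```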

PCA outlier detection

Then, the FinnGen data was merged with the 1000 Genomes Project (1kgp) data using the variants described above. A round of PCA was performed, and a Bayesian algorithm was used to detect outliers. This step removed 5,995 FinnGen samples. The figure below shows the scatter plots for the first 3 PCs; outliers, in green, are separated from the red FinnGen cluster.

While the method automatically flagged as outliers the 1kgp samples of non-European and Southern European ancestry, it did not manage to exclude some samples of Western European origin. Since the signal from these samples would have been too weak to allow a second round of detection without picking up substructure of the Finnish population, another approach was used. The FinnGen samples that survived the first round were used to compute another PCA, and the EUR and FIN 1kgp samples were projected onto the space spanned by the first 3 PCs. The centroid of each cluster was then calculated and used to compute the squared Mahalanobis distance of each FinnGen sample to each centroid. Because the squared Mahalanobis distance is a sum of squared variables with unit variance, it can be treated as a sum of 3 independent squared variables, which allows the squared distance to be mapped to a probability (chi-squared with 3 degrees of freedom). For each cluster, a probability of belonging to it was therefore computed for each sample. A threshold of 0.95 was then used to exclude FinnGen samples whose relative chance of belonging to the Finnish cluster fell below that level. This method produced another 290 outliers. The figure below shows the first three principal components.
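
A minimal sketch of the Mahalanobis/chi-squared membership test described above; how the two cluster probabilities are combined into a "relative chance" is an assumption, shown only as one possible reading:

```python
# Hedged sketch: probability that a sample belongs to a cluster, based on its
# squared Mahalanobis distance to the cluster centroid in the first 3 PCs.
import numpy as np
from scipy.spatial.distance import mahalanobis
from scipy.stats import chi2

def membership_prob(sample_pcs, cluster_pcs):
    centroid = cluster_pcs.mean(axis=0)
    vi = np.linalg.inv(np.cov(cluster_pcs, rowvar=False))
    d2 = mahalanobis(sample_pcs, centroid, vi) ** 2
    return chi2.sf(d2, df=3)  # 3 PCs -> chi-squared with 3 degrees of freedom

# One possible reading of the "relative chance" criterion (an assumption):
# p_fin = membership_prob(x, fin_1kgp_pcs)
# p_eur = membership_prob(x, eur_1kgp_pcs)
# non_finnish = p_fin / (p_fin + p_eur) < 0.95
```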

FIN 1kgp samples are shown in purple and EUR 1kgp samples in blue. FinnGen samples flagged as non-Finnish are shown in green, while those considered Finnish are in red.

Kinship

Then, all pairs of FinnGen samples related up to the second degree were identified. The figure below shows the distribution of kinship values.

Then, the previously identified "non-Finnish" samples were excluded and two algorithms were used to return a single subset of unrelated samples:

  • one, called greedy, repeatedly removes the highest-degree node from the network of relations until no more links are left in the network.

  • one, called native, based on a native implementation in Python's networkx package, applied to each subgraph of the network.

The larger independent set returned by the two algorithms was used to keep those samples, while the others were flagged as "outliers" for the final PCA (see the sketch below).
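
A hedged sketch of the two relatedness-pruning strategies, operating on a toy list of related sample pairs (up to second degree); the actual pipeline scripts are not reproduced here:

```python
# Hedged sketch of the "greedy" and "native" unrelated-set strategies.
import networkx as nx

def greedy_unrelated(pairs):
    """Repeatedly drop the highest-degree node until no edges remain."""
    g = nx.Graph(pairs)
    while g.number_of_edges() > 0:
        node, _ = max(g.degree, key=lambda nd: nd[1])
        g.remove_node(node)
    return set(g.nodes)

def native_unrelated(pairs):
    """networkx maximal independent set, applied to each connected subgraph."""
    g = nx.Graph(pairs)
    kept = set()
    for comp in nx.connected_components(g):
        kept |= set(nx.maximal_independent_set(g.subgraph(comp)))
    return kept

pairs = [("A", "B"), ("B", "C"), ("C", "D")]  # toy kinship pairs
keep = max(greedy_unrelated(pairs), native_unrelated(pairs), key=len)
```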

Then, the subset of outliers that also belong to the set of duplicates/twins was identified.

Final PCA

To compute the final step, the FinnGen samples were ultimately separated into three groups:

  • 182,616 inliers: unrelated samples with Finnish ancestry.

  • 79,182 outliers: non-duplicate samples of Finnish ancestry that are related to the inliers.

  • 9,543 rejected samples: samples that are either of non-Finnish ancestry or are twins/duplicates related to other samples.

Finally, the PCA was computed for the inliers, and the outliers were then projected onto the same PC space, allowing covariates to be calculated for a total of 261,798 samples (see the sketch below).
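
A minimal sketch of projecting the related "outlier" samples onto the PC space computed from the inliers; genotype standardization and the actual PCA software are omitted, so this is only a schematic of the projection step:

```python
# Hedged sketch: compute PCs on inliers and project outliers onto the same axes.
import numpy as np

def pca_project(inliers, outliers, n_pcs=10):
    mean = inliers.mean(axis=0)
    # principal axes of the inliers via SVD of the centered matrix
    _, _, vt = np.linalg.svd(inliers - mean, full_matrices=False)
    loadings = vt[:n_pcs].T                      # variants x PCs
    inlier_pcs = (inliers - mean) @ loadings
    outlier_pcs = (outliers - mean) @ loadings   # projection onto the same PC space
    return inlier_pcs, outlier_pcs
```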

Sample filtering based on phenotype data

Of the 261,798 non-duplicate population-inlier samples from the PCA, we excluded 1,390 samples from the analysis because of missing minimum phenotype data, and 3 samples because of a mismatch between imputed sex and the sex recorded in registry data. A total of 260,405 samples was used for the core analysis: 147,061 females and 113,344 males.

Further info

Bayesian outlier detection

Documentation from the original developers of the algorithm can be found here: http://www.well.ox.ac.uk/~spencer/Aberrant/aberrant-manu