Channel: Intangible Arts » awk

awk to subset records based on IDs from a different file


I recently had to extract 400 non-contiguous records from a tab-delimited file of ~25K records, each with ~199K columns. In this case, the file contained genetic data for ~199K SNPs across ~25K patients. The records I wanted to extract were those whose first-column ID appeared in a separate file (say, list_of_samps_for_test_set.txt).

Here is a solution using bash and awk. It takes 8.3s on my VM (51Mb RAM, 2.90GHz 6-core). The code demonstrates how to pass an array from bash to awk, and how to extract the lines whose first column matches the first column of a smaller table of IDs.

Also, I had to use gawk; plain awk gave me an error because the genotype file had >130K columns, and some awk implementations limit the number of fields per record.
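Before the full script, here is a minimal standalone sketch of the array-passing trick it relies on: a bash array is handed to awk via -v as one space-joined string, and awk rebuilds it with split(). The three-element array here is made up for illustration.

```shell
# pass a bash array to awk as one space-joined string, then split() it back
ids=(samp1 samp2 samp3)
awk -v var="${ids[*]}" 'BEGIN { n = split(var, a, " "); print n }' </dev/null
# prints 3
```

Note that "${ids[*]}" (not "${ids[@]}") is what joins the elements into a single word for -v.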


# file which contains subjects used in test
testsamp_file=file_with_sample_IDs_to_extract.txt
# file with genotype info
dos_file=file_with_entire_patient_genotype_data.txt
outFile=out.txt

# store the IDs we want (skip the header line)
want_id=( $(tail -n +2 "$testsamp_file" | cut -f 1) )
#echo "${want_id[@]}"

# now run through the huge dosage file and extract the rows we want
gawk -F'\t' -v var="${want_id[*]}" '
 BEGIN {
   ctr=0;
   # the bash array arrives in awk as one string, so split it back apart
   split(var,tmplist," ");
   # i is an index
   for (i in tmplist) {newlist[tmplist[i]]++;}

   # i is now an element
   #for (i in newlist) {print i};
 } {
   # print matching records and count them
   if ($1 in newlist) { print; ctr++; }
 }
 #END { print ctr " records"; }
' "$dos_file" > "$outFile"
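For comparison, the same filter can be written with awk's two-file idiom, which skips the bash-array plumbing entirely: while reading the first file (NR==FNR), load the IDs into a set, then print matching rows of the second file. This is a sketch against tiny made-up files (ids.txt, genos.txt), not the original data:

```shell
# two-file awk idiom: first file fills the lookup set, second file is filtered
printf 'samp1\nsamp3\n' > ids.txt
printf 'samp1\t0\t1\nsamp2\t2\t0\nsamp3\t1\t1\n' > genos.txt
awk -F'\t' 'NR==FNR { want[$1]; next } $1 in want' ids.txt genos.txt
# prints the samp1 and samp3 rows
```

For the real data you would feed it the ID list (header stripped) and the dosage file, using gawk for the same field-count reason as above.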

