Monday, October 21, 2013

Triple Anti-platelet Therapy

The role of platelets in acute coronary syndromes (ACS) is well established, and anti-platelet agents are the standard of care for the prevention of ACS. However, because of the high risk of bleeding, anti-platelet agents other than aspirin are not indicated in the primary prevention population, where the risk of an ACS event is low. Individuals with established CAD, by contrast, are at increased risk of subsequent events, and aspirin is indicated for such patients. In addition, those who have had a stent placed usually receive dual anti-platelet therapy (aspirin plus either clopidogrel, prasugrel, or ticagrelor). One would imagine that in a very high-risk population, inhibition of an additional pathway might provide additional benefit. However, the TRACER trial found that the addition of vorapaxar, an oral protease-activated receptor 1 (PAR-1) antagonist that inhibits thrombin-induced platelet activation, had no additional benefit in patients with acute coronary syndrome. Instead, adding vorapaxar to the standard dual anti-platelet regimen was associated with an increased risk of major bleeding. This study suggested that too much platelet inhibition may not add benefit for ACS prevention but does increase the risk of bleeding.

More recently, a meta-analysis found that adding cilostazol, another anti-platelet agent that acts by inhibiting phosphodiesterase 3, to standard dual anti-platelet therapy was associated with a 36% reduction in major adverse cardiac events (MACE; odds ratio (OR) = 0.64; 95% confidence interval (CI) = 0.51-0.81; P < .01), a 40% reduction (OR = 0.60; 95% CI = 0.44-0.80; P < .01) in target vessel revascularization (TVR), a 44% reduction (OR = 0.56; 95% CI = 0.34-0.91; P = .02) in target lesion revascularization (TLR), and a 47%/44% reduction in in-segment/in-stent restenosis (P < .01), along with lower in-segment/in-stent late loss (P < .01). These are large effect sizes, suggesting that the addition of cilostazol is very effective in reducing events. Cilostazol also inhibits smooth muscle contraction, resulting in peripheral arterial dilatation, so it is possible that the observed benefit reflects a combination of these two effects.

Thursday, October 17, 2013

Thrombocytopenia in vWD type 2B

Von Willebrand factor (vWF) is a chaperone protein for coagulation factor VIII and is essential for the recruitment of platelets to a growing thrombus under the conditions of high shear stress usually present in the arterial system. Deficiency (quantitative or qualitative) of vWF is associated with a bleeding tendency, known clinically as von Willebrand disease (vWD). Type 2 vWD is due to a functional defect in vWF, and type 2B is associated with gain-of-function mutations in exon 28 of the vWF gene that increase the affinity of vWF for platelets. The region encoded by exon 28 binds to the platelet vWF receptor, glycoprotein Ibα (GpIbα). Patients with type 2B vWD present with bleeding and moderate to severe thrombocytopenia, as well as a decrease in the high-molecular-weight vWF multimers. The thrombocytopenia is associated with the presence of giant platelets and spontaneous platelet aggregates. The molecular mechanisms underlying the thrombocytopenia are unclear.

GpIbα is present on the surface of megakaryocytes as well as on platelets. Thus it is possible that the interaction of mutated vWF from patients with type 2B vWD with megakaryocytes results in decreased platelet formation and the release of giant platelets. In fact, Nurden et al showed that this may be the case. In cultures of control megakaryocytes, purified vWF had a positive influence on platelet production, and this effect was specifically inhibited by an antibody blocking vWF binding to GpIbα. Megakaryocytes cultured with vWF from patients with type 2B vWD showed a disorganized demarcation membrane system and abnormal granule distribution on electron microscopy. The platelets produced from such megakaryocytes had abnormalities similar to those found in patients with vWD type 2B. This impaired megakaryocytopoiesis could not only explain the occurrence of giant platelets but also contribute to the lower platelet count in vWD type 2B patients.

In addition to defects in platelet production, there may also be increased platelet clearance, that is, increased uptake of platelets (with attached vWF) by the monocyte-macrophage system of the body. Casari et al showed that this is also the case in a series of experiments. They found that vWD type 2B platelets have a shorter circulatory half-life than wild-type (wt) platelets, which could contribute to the lower platelet counts in vWD type 2B mice. Further analysis revealed that type 2B vWF is present on the surface of platelets of thrombocytopenic vWD type 2B mice, and that these vWF/platelet complexes were taken up efficiently by macrophages in the liver and spleen. Thus, they provide direct evidence that part of the thrombocytopenia in vWD type 2B can be explained by increased clearance of vWF/platelet complexes.

Thursday, September 26, 2013

Explanation of different options for normalizations in Cufflinks

This is the best explanation I have seen so far of the different normalization schemes available in Cufflinks and how they affect the FPKM calculation. I am copying it directly from the thread, which can be seen here

“With cufflinks you can have three different normalizations: fragments mapped to the genome (in millions), fragments mapped to the transcriptome (in millions: --compatible-hits-norm), or upper quartile (-N). Regardless of the normalization, the same number of reads is quantified at each gene. I've looked into it myself. If you run cufflinks using all three of those normalizations and then look at each of the separate isoforms.fpkm_tracking files, you can confirm it. Check the coverage and FPKM columns. You should see different FPKMs but identical coverages across the three quantifications. Furthermore, if you divide the FPKMs by each other you should see that at each gene there's a constant ratio between the FPKMs.

If you calculate FPKMs yourself you can see why the numbers shift around. To be honest the "FPKM" designation is misleading when you're using any normalization other than "mapped reads in millions". Right? Fragments per kilobase per million mapped reads is what you're used to.
So say we have a gene that's 2500 bases long. We've got 121 fragments that mapped to it and we've got 34.7 million fragments mapped to the genome. We can get the FPKM like so..

FPKM = 121/(34.7*2.5) = 1.394813

Say only 27.4 million fragments mapped to the transcriptome. So if you used --compatible-hits-norm then the calculation looks like this:

FPKM = 121/(27.4*2.5) = 1.766423

Those aren't that different from one another. Now if you use upper quartile, we're talking about the upper quartile value of fragments mapped to genes in the sample. That number might be something like 12,000. Divide this value by 1e6 to put it into "millions", like you do with mapped fragments, and it becomes 0.012. So now the calculation looks like this:

FPKM = 121/(0.012*2.5) = 4033.333

So maybe it makes sense to scale the upper quartile normalization value by 1000 so that the "FPKM" comes out as 4.033 instead of 4033. That's reasonable. But it really shouldn't be called an FPKM, because if you think about it, it's like someone telling you there are 14 cars outside and you assume they mean 14...but they actually told you 14 in base 16, which would be 20 in base 10 (or maybe like expecting a measurement in cm but being given the measurement in inches with a cm designation). It's not fragments per kilobase per million mapped reads, it's fragments per kilobase per upper quartile of read counts @ genes. So FPKPUQRCG. That name sucks.

The point of these different normalizations is only applicable when you're comparing samples to each other. So if your goal is to see if gene X is expressed higher in sample A versus B, then regardless of the normalization used (as long as you use the same one on both samples) you'll find your answer. The upper quartile normalization has been shown to be more robust, so maybe it's better to use it for comparing samples to one another. Also, obviously, for the expression levels to make sense to other people we all need to be using the same normalization. We should probably all be using upper quartile normalization, but that puts the numbers on a different scale than we're used to seeing.”
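To make the arithmetic in the quoted explanation concrete, here is a minimal R sketch of the same three calculations (the fpkm function is my own illustration, not part of Cufflinks; the numbers are the ones used in the example above):

### fragments per kilobase per "million" of whatever normalization factor is used
fpkm <- function(fragments, gene.length.bp, norm.millions) {
  fragments / (norm.millions * (gene.length.bp / 1000))
}

fpkm(121, 2500, 34.7)         ### mapped to genome:        ~1.39
fpkm(121, 2500, 27.4)         ### --compatible-hits-norm:  ~1.77
fpkm(121, 2500, 12000 / 1e6)  ### upper quartile (-N):     ~4033.33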

Wednesday, July 31, 2013

Extracting phenotype type by genotype using GenABEL

GenABEL is an excellent R package for genome-wide association studies (GWAS). It uses a special data structure to store data efficiently. While this data structure results in remarkable time savings when running a GWAS, it does have some limitations. For example, in GWAS one often needs to know the phenotype distribution across the genotypes of some variant, but I couldn’t find a straightforward way of looking at the phenotype distribution across genotypes (there may be a better way of doing it, but I couldn’t find it).

To get phenotypic information across genotypes, I used the following approach (assuming that the data is in an object called ‘data’):

1. Extract the phenotypic information
pheno <- phdata(data)
The returned object is a dataframe, which can be confirmed with the class(pheno) command.

2. Extract the SNP data
snps <- as.character(data[, c("SNP1", "SNP2", "SNP3")])
You can change as.character in the line above to as.numeric if you want the genotype information in 0/1/2 format.
The returned object is a matrix whose row names are the subject IDs. Thus we need to do two things with this matrix: first convert it into a dataframe, and then turn the row names into an 'id' column.

3. Convert the matrix 'snps' into a dataframe with the row names as an additional column
snps.df <- data.frame(as.numeric(rownames(snps)), snps)
colnames(snps.df)[1] <- "id"                ### change the first column name to 'id'
snp.data <- merge(pheno, snps.df, by="id")

Now you have a dataframe with the phenotype data and the SNP genotype data.
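From here, summarizing a phenotype by genotype is ordinary R. A small illustration, assuming the phenotype dataframe contains a column named 'height' (substitute your own phenotype and SNP names):

### genotype counts and phenotype summary per genotype class
table(snp.data$SNP1)
tapply(snp.data$height, snp.data$SNP1, mean, na.rm=TRUE)
tapply(snp.data$height, snp.data$SNP1, sd, na.rm=TRUE)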

Sunday, June 30, 2013

Downloading and Merging NHANES datasets in R

The National Health and Nutrition Examination Survey (NHANES) is a program of studies designed to assess the health and nutritional status of adults and children in the United States. The survey is unique in that it combines interviews and physical examinations. The data files for the more recent surveys are provided in SAS Export (XPT) format. To read these files into R, one needs the functions in the foreign package; if you don’t have this package, you will need to install it first. In the first step we download the files, and in the second step we import them into R.

# load the foreign package (reads SAS Export files into R)
require(foreign)

# Set your working directory
setwd( "<YOUR WORKING DIRECTORY>")

### Download the demographics file of the NHANES 2005-2006 dataset
download.file(
"ftp://ftp.cdc.gov/pub/Health_Statistics/nchs/nhanes/2005-2006/DEMO_D.XPT",
"Demo0506.xpt", mode='wb')

### Read the downloaded file
Demo56 <- read.xport("Demo0506.xpt")

### Download the blood pressure file of the NHANES 2005-2006 dataset
download.file(
"ftp://ftp.cdc.gov/pub/Health_Statistics/nchs/nhanes/2005-2006/BPX_D.XPT",
"BP0506.xpt", mode='wb')

### Read the downloaded file
BP56 <- read.xport("BP0506.xpt")

### Merge the two files (by their shared columns, here the respondent ID SEQN)
N_05_06 <- merge(Demo56, BP56, all=T)

You can download several files and then merge them together to get your dataset.
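Since the download-then-read pattern is the same for every file, it can also be wrapped in a small helper function. This is just a sketch under the assumption that each file is identified by its XPT file name on the CDC FTP server (the function name and arguments are my own, not part of any package):

### hypothetical helper: download one NHANES XPT file and read it into R
read.nhanes <- function(xpt, cycle="2005-2006") {
  url <- paste("ftp://ftp.cdc.gov/pub/Health_Statistics/nchs/nhanes/",
               cycle, "/", xpt, sep="")
  download.file(url, xpt, mode='wb')  ### save under the same file name
  read.xport(xpt)                     ### requires the foreign package
}

Demo56 <- read.nhanes("DEMO_D.XPT")
BP56 <- read.nhanes("BPX_D.XPT")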

Saturday, June 29, 2013

Updating R – in Windows 7

R is a great statistical package with tremendous flexibility. However, there is no very straightforward (point-and-click) way of updating it. R users have developed several different methods of updating R along with its packages, including one described on CRAN.

I came across one post about updating R on a Mac; I tried the approach on Windows 7 with minor changes and it worked fine.

So here is what I did:
First, in the old version of R, I ran the following commands:
tmp <- installed.packages()
### keep only packages without a Priority (i.e., not base/recommended packages)
installedpkgs <- as.vector(tmp[is.na(tmp[,"Priority"]), 1])
save(installedpkgs, file="installed_old.rda")

Then I downloaded and installed the newer version of R. In the new version of R, I ran the following commands:
source("http://bioconductor.org/biocLite.R")
biocLite()
load("installed_old.rda")
tmp <- installed.packages()
installedpkgs.new <- as.vector(tmp[is.na(tmp[,"Priority"]), 1])
missing <- setdiff(installedpkgs, installedpkgs.new)
for (i in seq_along(missing)) biocLite(missing[i])

All packages were automatically installed to the newer version. Then, I went to Windows Control Panel and uninstalled the older version of R.

That’s it!
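As an aside, if all of your packages come from CRAN, the Bioconductor step can presumably be skipped and the missing packages reinstalled in a single call (a sketch under that assumption):

load("installed_old.rda")
tmp <- installed.packages()
installedpkgs.new <- as.vector(tmp[is.na(tmp[,"Priority"]), 1])
install.packages(setdiff(installedpkgs, installedpkgs.new))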

Wednesday, May 22, 2013

Some R Code Using SAScii

For example, suppose I want to use the Adult data file from NHANES III. To import it using SAScii:

library(SAScii)
### the SAS program that describes the fixed-width layout, and the raw data file
SAScode <- "ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/nhanes/nhanes3/1A/adult.sas"
ftpdata <- "ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/nhanes/nhanes3/1A/adult.dat"
data <- read.SAScii(ftpdata, SAScode, beginline=5)

Because the line with INPUT in the SAS code file begins at line 5, I gave the option beginline=5. Of course, this assumes that you have downloaded and installed the ‘SAScii’ package. Now you can save this file in any desired format or use it for further downstream analysis. It does take quite some time, longer than it would take using SAS, but it produces the desired output.
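To avoid re-running the slow import, one option is to save the resulting dataframe in R's native format and reload it in later sessions (the file name here is my own choice):

saveRDS(data, "nhanes3_adult.rds")      ### save once
adult <- readRDS("nhanes3_adult.rds")   ### reload later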

Tuesday, May 21, 2013

An Interesting R Package–SAScii

For several surveys, such as NHANES III, the data for many files is available in ASCII format along with SAS code to read it. However, for those of us who want to use that data in R, it is a little cumbersome to first import the data into SAS and then export it into a format that can be read into R. I came across this relatively new package, SAScii, which uses the ASCII file and the accompanying SAS code to import the data directly into R. Quite nice.

REDUCE Trial – My Thoughts

This week, the REDUCE trial was published in JAMA: “Short-term vs Conventional Glucocorticoid Therapy in Acute Exacerbations of Chronic Obstructive Pulmonary Disease: The REDUCE Randomized Clinical Trial.” JAMA. 2013. doi:10.1001/jama.2013.5023.

REDUCE (Reduction in the Use of Corticosteroids in Exacerbated COPD) was a randomized, multicenter non-inferiority trial in 5 Swiss teaching hospitals that enrolled 314 patients (from March 2006 through February 2011) who presented to the emergency department with an acute COPD exacerbation and were past or present smokers (≥20 pack-years) without a history of asthma. Participants were treated with 40 mg of prednisone daily for either 5 or 14 days in a placebo-controlled, double-blind fashion. The predefined non-inferiority criterion was an absolute increase in exacerbations of at most 15% over 6 months of follow-up. The trial found that 5-day treatment with systemic glucocorticoids was non-inferior to 14-day treatment with regard to re-exacerbation within 6 months of follow-up, while significantly reducing glucocorticoid exposure.

The trial results are interesting for those of us who actually practice medicine and see COPD patients with exacerbations on a regular basis. There are three settings in which the results of this trial can potentially impact practice. In office practice, there are definitely some patients who do just fine with a 5-day course of prednisone; these are the patients who get prescribed a Medrol dose pack. On the other hand, there are patients who need longer courses of steroids, and for such patients it is important to give longer courses (up to 14 days and sometimes even longer) to keep them out of the hospital. Interestingly, based on the severity of a patient’s symptoms and signs alone, it is impossible to predict who will need longer treatment; it is only the patient’s history that tells what will work. Practice in the emergency department is unlikely to differ much from the office setting, except that patients may present with more severe exacerbations. There too, history is the only helpful guide. However, patients admitted to the hospital are likely the subset who didn’t respond quickly enough to steroids in the ED and thus needed admission with persistent severe symptoms. For such patients, it remains a possibility that a larger number (if not all) of them will need longer therapy.

Thus, in my view, if a patient is new to me and presents with a COPD exacerbation, and I don’t have historical information on this patient, I will feel comfortable giving a 5-day course of steroids. Otherwise, if I have additional information telling me that a shorter course will not be helpful, I should go for the longer course.

Monday, April 15, 2013

VPREB3 and Platelets

The VPREB3 protein is the human homolog of the mouse VpreB3 (8HS20) protein and is specifically expressed in cell lines representative of all stages of B-cell differentiation. It is also related to VPREB1 and other members of the immunoglobulin supergene family. The protein associates with membrane mu heavy chains early in the course of pre-B cell receptor biosynthesis. Its precise function is not known, but it may contribute to mu chain transport in pre-B cells.

This protein doesn’t appear to be detectable in platelet proteome studies, but its transcript is present in platelets (detected by both RNA-seq and microarray studies). Its role in platelet biology remains unclear. Even more interesting is that the RNA-seq experiment found a much lower expression level than the microarray experiment (0.15 RPKM vs. 27,250 MFI). Perhaps the level of expression of this gene is quite variable from person to person.

CD23 and Platelets

CD23 (Fc epsilon receptor II) has been shown to be present in platelets and may play a role in platelet aggregation. However, none of the publicly available platelet proteome databases (Martens et al., Proteomics 2006; Burkhart et al., Blood 2012; Vaudel et al., Journal of Proteome Research 2012) has found this specific protein in platelets. At the transcriptome level, CD23 RNA doesn’t appear to be present in megakaryocytes (by microarray). However, platelet RNA-seq analysis has found low levels of the transcript in platelets (RPKM = 0.37).

While it is easy to speculate about why there is such a discrepancy, it is possible that CD23 is induced in people with allergen exposure, and that in individuals who are otherwise healthy (as were the people in the studies that failed to find CD23) this transcript and its product may not be detectable.

It would be worthwhile to look at individuals with allergic responses (or parasitic infections) and examine whether they have higher expression of the CD23 gene and protein. Comparing platelet aggregation between individuals with allergies and those without may also be illuminating.

Tuesday, March 12, 2013

Here comes STREAM ……

The results of STREAM were presented at the ACC meeting, and the study was published online in NEJM: “Fibrinolysis or Primary PCI in ST-Segment Elevation Myocardial Infarction.” In a nutshell, the results can be summarized as follows: pre-hospital fibrinolysis with bolus tenecteplase, in conjunction with timely coronary angiography, was similar to primary PCI in patients with early STEMI who could not undergo primary PCI within 1 hour after first medical contact. Patients in whom fibrinolysis failed underwent emergency PCI (36.3% of the fibrinolysis group). Cardiogenic shock and congestive heart failure occurred more often in the primary PCI group, while intracranial hemorrhage and ischemic strokes were more frequent in the fibrinolysis group. The study findings are likely to be reassuring for some parts of the world, while they may change treatment strategies in other places.