Transplant nephrectomy improves survival following a failed renal allograft. J Am Soc Nephrol 21: 374–380, 2010
There is a lack of consensus among transplant physicians regarding the optimal management of the large number of patients (approximately 2000 per year) who return to dialysis with a failed kidney transplant (1–3). The high rates of morbidity and mortality experienced by patients after primary kidney transplant failure exceed those observed in dialysis patients who are awaiting kidney transplantation (4–7).
Dialysis patients with failed previous transplants have been shown to have a standardized mortality ratio of 1.35 compared with dialysis patients who have never received a renal transplant (8). Several predictors of patient survival after allograft failure have been identified, including age, diabetes, HLA matching, gender, and first transplant donor type (1,9). Mortality after primary allograft failure is strongly influenced by the cause of ESRD, with the highest rate found among patients with type 1 diabetes compared with either type 2 diabetes or other causes of ESRD (1).
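For orientation, the standardized mortality ratio (SMR) cited above is simply the ratio of observed to expected deaths in the group of interest. A minimal sketch of the computation, with an exact Poisson confidence interval, follows; the counts are hypothetical and chosen only so that the ratio comes out to 1.35, and are not taken from reference 8.

```python
# Hypothetical SMR calculation with an exact Poisson (Garwood) confidence interval.
from scipy.stats import chi2

def smr_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """Return (SMR, lower, upper) for observed versus expected deaths."""
    smr = observed / expected
    # Exact Poisson limits for the observed count, scaled by expected deaths.
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lower, upper

# Example: 270 observed deaths against 200 expected gives an SMR of 1.35.
print(smr_with_ci(270, 200.0))
```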
The reasons for the poor survival after graft failure have remained elusive. The triggering of an inflammatory process by the retained allograft (characterized by hypoalbuminemia, erythropoietin resistance, high ferritin, and elevated C-reactive protein) and an increased incidence of infectious and cardiac complications as a result of ongoing immunosuppression have been observed but not yet confirmed by controlled clinical studies (10–12).
The goal of this observational study was to evaluate the impact of transplant nephrectomy on the survival of patients who return to dialysis after kidney transplant failure. Using the US Renal Data System, the authors identified and analyzed a cohort of patients (n = 10,951) who returned to dialysis (hemodialysis and peritoneal dialysis) during an 11-year period spanning January 1994 through December 2004.
Findings.
Thirty-one percent (n = 3451) of the study cohort underwent a transplant nephrectomy during the study period. The procedure was associated with a 32% lower adjusted relative mortality risk (adjusted hazard ratio 0.68; 95% confidence interval 0.63 to 0.74). The median time between the return to dialysis and nephrectomy was 1.66 years (interquartile range 0.73 to 3.02). The group that underwent a nephrectomy was composed of younger individuals who were less likely to have diabetes and/or cardiovascular disease (congestive heart failure and cerebrovascular and peripheral vascular disease). These patients were more likely to be black; to have required the use of T cell–depleting antibodies (thymoglobulin, OKT3); and to have experienced anemia, sepsis, and urinary tract infections. Despite these complications, the rate of death within 30 days of the transplant nephrectomy was only 1.5% (53 deaths among 3451 patients). Lastly, they were more likely (10% versus 4.1%; P < 0.001) to receive a second transplant compared with those who did not undergo a nephrectomy.
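Adjusted hazard ratios such as the 0.68 reported here are conventionally estimated with a Cox proportional hazards model. The sketch below shows the general form of such a fit using the lifelines library; the data are simulated and the covariate set is deliberately simplified, so it does not reproduce the paper's actual adjustment variables or its handling of nephrectomy timing.

```python
# Simulated Cox proportional hazards fit: nephrectomy coded as a binary
# covariate with a true hazard ratio of 0.68 relative to no nephrectomy.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "nephrectomy": rng.integers(0, 2, n),  # 1 = failed allograft removed
    "age": rng.normal(50, 12, n),
    "diabetes": rng.integers(0, 2, n),
})
# Exponential event times whose hazard is multiplied by 0.68 for nephrectomy.
baseline = rng.exponential(5.0, n)
df["years_on_dialysis"] = baseline / np.exp(np.log(0.68) * df["nephrectomy"])
df["died"] = rng.integers(0, 2, n)  # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="years_on_dialysis", event_col="died")
cph.print_summary()  # the exp(coef) column holds the adjusted hazard ratios
```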
Commentary.
The long-held clear-cut indications for transplant nephrectomy are fever, infection, hematuria, and a painful allograft. Most of these symptoms are the result of rejection and/or allograft ischemia (venous and arterial thrombosis). The increased morbidity and mortality (range 6 to 37%) associated with the nephrectomy of a failed allograft are undisputed (13,14). Conversely, no consensus has been reached regarding the effects of either early or late allograft nephrectomy on repeated renal transplant survival (15–17) and/or alloimmunity (18–20).
The retrospective, observational analysis provided by Ayus et al. is insightful and suggests that the traditional indications for failed allograft nephrectomy may be in need of revision. Patient and future renal transplant survival after nephrectomy for graft-related symptoms are probably different from those observed when the procedure is done electively. No information regarding the indication for the procedure and/or the type of prescribed immunosuppression is provided in this study. If the need for nephrectomy were triggered by an immune event (e.g., humoral rejection), then the procedure may be merely a marker of high immune responsiveness and an indication of an adverse outcome with repeated transplantation (i.e., selection bias). The study design (observational analysis of administrative data) does not permit random assignment of allograft nephrectomy and is therefore a potential source of bias. The authors applied a series of adjustments, including sensitivity analyses and propensity scores, to minimize this effect; nevertheless, as they note, the most convincing evidence for the role of transplant nephrectomy in asymptomatic patients must come from prospective, randomized clinical studies.
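For readers unfamiliar with the propensity score adjustment the authors used, the sketch below illustrates one common variant, inverse probability of treatment weighting (IPTW). The covariates and model specification here are hypothetical; the paper's actual propensity model is not described in this commentary.

```python
# Hypothetical propensity score model: probability of undergoing nephrectomy
# as a function of baseline covariates, converted to IPTW weights that could
# then be passed to a weighted survival model (e.g., CoxPHFitter's weights_col).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(50, 12, n),
    "diabetes": rng.integers(0, 2, n),
    "cardiovascular_disease": rng.integers(0, 2, n),
})
df["nephrectomy"] = rng.integers(0, 2, n)

X = df[["age", "diabetes", "cardiovascular_disease"]]
ps_model = LogisticRegression().fit(X, df["nephrectomy"])
df["ps"] = ps_model.predict_proba(X)[:, 1]  # P(nephrectomy | covariates)

# Weight each patient by the inverse probability of the treatment received,
# which balances the measured covariates across the two groups.
df["iptw"] = np.where(df["nephrectomy"] == 1, 1 / df["ps"], 1 / (1 - df["ps"]))
```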
Footnotes
Published online ahead of print. Publication date available at www.cjasn.org.
The success of continued steroid avoidance after kidney transplantation in the US. Am J Transplant 9: 2768–2776, 2009
Steroids remain the proven standard of care for renal transplant patients. Their use has been associated with a myriad of metabolic adverse effects (well known to the reader), which, in turn, has motivated many transplant patients to request their discontinuation.
Despite the initial encouraging news from small, single-center studies, the largest prospective, randomized, long-term (>5 years) trial to date failed to show a significant metabolic benefit from steroid avoidance/minimization (1). In addition, steroid withdrawal studies have been characterized by a high incidence of biopsy-confirmed mild acute rejection (Banff 1a and 1b). These findings have been more pronounced in patients at high immunologic risk (e.g., the young; patients of black race) and are primarily seen with delayed and slow steroid tapers (carried out >6 months from the time of transplantation). As a result of this experience and the introduction of more potent immunosuppressants such as antithymocyte globulin, anti-CD25 antibodies (daclizumab and basiliximab), and anti-CD52 antibodies (alemtuzumab), slow steroid withdrawal protocols have been replaced by protocols in which steroids are minimized or avoided. The avoidance protocols in particular are characterized by either no corticosteroid therapy or the full elimination of steroids within 7 days of transplantation.
A recent US analysis from the Scientific Registry of Transplant Recipients concluded that patients who are on steroid avoidance therapies do not have a worse outcome compared with patients who are on maintenance steroid regimens (2). A recent discussion of this retrospective analysis concluded that, despite a possible selection bias, the findings indicated that patients could be safely discharged on a steroid-free regimen (3). The development of interstitial fibrosis and tubular atrophy (so-called chronic allograft nephropathy) and a higher incidence of more severe rejection (Banff 2a) have been identified in long-term steroid avoidance studies (1). In some cases, it may take as long as 5 years for renal allograft function impairment to become apparent in patients who receive a steroid avoidance regimen (4).
The aim of the study by Schold et al. was to determine which factors are associated with the failure of transplant patients to remain free of steroid therapy 6 months after transplantation. The authors used multivariate logistic regression and Kaplan-Meier and Cox proportional hazards models to evaluate patient and graft survival among deceased- and living-donor recipients. In their study of >80,000 patients (N = 84,647), they categorized the transplant recipients into three groups: (1) patients who received steroids uninterruptedly after transplantation (n = 60,429 [71%]), (2) patients who had received no steroids since transplantation (n = 18,591 [22%]), and (3) patients who were started on steroids after a 6-month steroid-free period (n = 5627 [7%]). They obtained their information from the Scientific Registry of Transplant Recipients data system, focusing on the period from 2002 through 2008.
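As a rough illustration of the Kaplan-Meier comparison named above, the sketch below plots graft survival for a steroid-free group against a maintenance-steroid group and applies a log-rank test. The data are simulated; only the group labels mirror the study's categories.

```python
# Simulated Kaplan-Meier comparison of graft survival by steroid strategy.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_free = rng.exponential(8.0, 500)    # years to graft loss, steroid-free group
t_maint = rng.exponential(8.0, 500)   # maintenance-steroid group
e_free = rng.integers(0, 2, 500)      # 1 = graft loss observed, 0 = censored
e_maint = rng.integers(0, 2, 500)

kmf = KaplanMeierFitter()
kmf.fit(t_free, e_free, label="steroid-free")
ax = kmf.plot_survival_function()
kmf.fit(t_maint, e_maint, label="maintenance steroids")
kmf.plot_survival_function(ax=ax)

result = logrank_test(t_free, t_maint, e_free, e_maint)
print(result.p_value)  # tests whether the two survival curves differ
```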
Findings.
Patients who remained steroid-free were more likely to have low immunologic risk (e.g., living-donor transplant recipients, white patients, unsensitized patients, primary transplant recipients). Among the risk factors implicated in the resumption of steroids among deceased-donor recipients were black race, high sensitization, HLA mismatching, retransplantation, and receipt of organs from older donors; these patients were identified as having high immunologic risk. A similar trend was observed for living-donor recipients who resumed steroids 1 year after transplantation, among whom donor age >60 years, high sensitization, a lower rate of preemptive transplantation, a greater likelihood of an unrelated donor, increased HLA mismatching, and retransplantation were noted. Graft survival was not different between the steroid avoidance group and those who were maintained on corticosteroids (adjusted hazard ratio [AHR] 1.01; 95% confidence interval [CI] 0.96 to 1.07). Conversely, the need to resume steroid therapy was clearly detrimental to the outcome of the renal allograft (AHR 1.20; 95% CI 1.11 to 1.29) and to patient survival (AHR 1.17; 95% CI 1.06 to 1.29). There was no statistical difference between deceased-donor and living-donor recipients.
Induction therapy with alemtuzumab and the use of tacrolimus were associated with a lower likelihood of steroid initiation at 6 months. Lastly, there was a suggestion (P < 0.001) of a center effect on both the use of a steroid-free protocol and the resumption of steroid therapy.
Commentary.
The introduction of more potent immunosuppressive agents (antithymocyte globulin, alemtuzumab, tacrolimus) has facilitated the long-sought development of steroid-free/minimization regimens; however, this management approach should be individualized. On the basis of the findings of Schold et al., until the long-term safety and efficacy of steroid minimization strategies are validated by controlled clinical trials with adequate long-term follow-up, the use of steroid avoidance/minimization protocols in patients at high immunologic risk (black race, a high degree of sensitization [panel-reactive antibody >30%], HLA mismatching, retransplantation, and organs from older donors) should be discouraged.
Factors associated with progression of interstitial fibrosis in renal transplant patients receiving tacrolimus and mycophenolate mofetil. Transplantation 88: 897–903, 2009
The most common cause of renal allograft loss among patients who return to dialysis or undergo retransplantation is chronic allograft injury, which is morphologically characterized by interstitial fibrosis and tubular atrophy (IF/TA). The severity of the sclerosing kidney graft injury is primarily graded by the extent of tubulointerstitial damage; Banff scores for interstitial fibrosis and tubular atrophy of ≥2 (representing involvement of ≥25% of the cortical area) are considered clinically relevant. It has been suggested that this chronic allograft injury is the result of two different types of injury. An early phase (<1 year), characterized by mild tubulointerstitial damage, is the result of ischemia-reperfusion injury and subclinical and clinical allograft rejection. A late phase (>1 year), characterized by arteriolar hyalinosis, glomerulosclerosis, and additional tubulointerstitial damage, is the result of hypertension, calcineurin inhibitor (CNI) nephrotoxicity, and chronic humoral rejection (1). Acceptance of this framework implies that we should closely monitor the renal allograft for markers of subclinical rejection, use potent immunosuppression in the early posttransplantation period (<6 months), and, later (>6 months), consider not only minimizing the dosage of immunosuppression but also using less nephrotoxic immunosuppressive agents. Addressing these goals has been the subject of multiple single-center and multicenter trials, but, so far, the identification of a safe and beneficial method has remained elusive. It must be kept in mind that once the "fibrotic process" begins, it inexorably leads to allograft loss. Over the years, many antifibrogenic strategies have been proposed (avoidance or dosage reduction of CNIs, use of mycophenolate mofetil, use of angiotensin-converting enzyme inhibitors [ACEIs] or angiotensin II receptor blockers [ARBs], blockade of endothelin, and anti–TGF-β therapy), but, unfortunately, none of them has consistently provided beneficial results.
In a previous open-label, multicenter, randomized trial, the authors of this study reported a very low (<5%) incidence of subclinical rejection during the first 6 months after transplantation (2). The study cohort (n = 218) consisted of patients who had low immunologic risk (white race, panel-reactive antibody <5%); received no induction therapy; and were treated with tacrolimus, mycophenolate mofetil, and prednisone. The study patients were randomly assigned to undergo protocol biopsies at engraftment and at 1, 2, 3, 6, and 24 months (n = 111; biopsy group) or a biopsy at engraftment and at 6 and 24 months (n = 107; control group). The primary aim was to determine the prevalence of IF/TA (Banff score ≥2). An additional goal was to determine the clinical value of protocol biopsies in the treatment of this selected group of patients. No statistical difference (P = 0.28) was found between the two groups, leading the authors to conclude that protocol biopsies are of no value in the treatment of this type of patient in the early posttransplantation period (2).
Findings.
This report is an extension (6 to 24 months) of the biopsy findings described above; 25% of the original patients were discontinued from the study for various reasons. By 24 months, and despite preserved renal function (calculated creatinine clearance approximately 74 ml/min) and low-grade proteinuria (<500 mg/d), 40 to 50% of the patients had developed IF/TA ≥2. A logistic regression analysis identified deceased-donor status (associated with worse outcomes) and the use of ACEIs and/or ARBs (associated with better outcomes) as the only two independent predictors of this outcome. Of note, the association with ACEIs and ARBs was identified by a post hoc analysis, in which their use was found to reduce the odds of IF/TA four-fold.
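The logistic regression step described above can be illustrated as follows: a binary IF/TA (≥2) outcome regressed on donor source and ACEI/ARB use, with odds ratios read off as the exponentiated coefficients. The data are simulated so that ACEI/ARB use carries a true odds ratio of 0.25, matching the four-fold reduction reported; nothing else about the authors' model is reproduced here.

```python
# Simulated logistic regression for a binary IF/TA >= 2 outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 160
df = pd.DataFrame({
    "deceased_donor": rng.integers(0, 2, n),
    "acei_arb": rng.integers(0, 2, n),
})
# True model: deceased-donor status raises, ACEI/ARB use lowers, the odds.
logit = -0.2 + 1.0 * df["deceased_donor"] - np.log(4) * df["acei_arb"]
df["ifta_ge2"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["deceased_donor", "acei_arb"]])
fit = sm.Logit(df["ifta_ge2"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios; acei_arb should be near 0.25
```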
Commentary.
The recruitment of profibrogenic cytokines (IL-1, IL-6, TGF-β, TNF-α, and adhesion molecules) by early intragraft events (e.g., ischemia-reperfusion injury, subclinical rejection) triggers a cascade of events that augments collagen synthesis and ultimately leads to the demise of the allograft. Clearly, sole reliance on changes in clinical markers such as creatinine, proteinuria, and estimated GFR is inadequate. A pragmatic, clinically applicable method to identify molecular markers of subclinical rejection has not yet been validated in controlled trials. Until this happens, the selective use of protocol biopsies remains a clinically useful tool. There is experimental and clinical evidence that the use of CNIs is associated with a profibrotic cytokine profile (TGF-β1 and tissue inhibitor of metalloproteinases) that leads to interstitial fibrosis, and that this profile is abrogated by angiotensin II blockade (3). Even though the use of ACEIs and/or ARBs has not gained wide acceptance in the treatment of renal transplant recipients who are treated with CNIs (because of the potential for decreased renal blood flow and GFR in a single kidney), the findings by Rush et al. call for randomized, controlled clinical trials of this class of pharmacologic agents.