Implications of Hemodialysis Modeling for Large Animal Studies
Down the road, we're going to move toward large-animal studies to test our hemodialyzers: a large animal model (either a sheep or a pig, preferably the former) allows us to model the effects of complete kidney failure, as opposed to just uremia in rats, and to work at a scale comparable to that of a human.
In the course of the dialysis modeling work that will be the focus of my upcoming paper, I have arrived at a number of conclusions about what a reasonably comprehensive animal study, one approaching readiness for a phase I clinical trial, would need to look like. While it should be expected that these conclusions will continue to evolve, and that the final device may look very different from what we conceive of now, it may be useful at this stage to consider what we know and where we're going.
Urea and Uremic Toxins
One of the most fundamental questions that needs answering is: what does it mean to effectively treat kidney failure with dialysis? Put another way, what are the clearance characteristics of a successful hemodialyzer?
Clinically, by far the most universal measure of dialysis adequacy is the clearance of urea, expressed by the dimensionless quantity Kt/V = ln(C0/C): a comparison between pre- and post-dialysis urea concentrations in the patient's serum. Numerous studies have correlated patient outcomes and symptoms with Kt/V values to arrive at an empirically determined "optimal" target of Kt/V = 1.2 per treatment (equal to approximately a 70% reduction in serum urea), with three treatments per week. If you ask a clinical nephrologist what adequate dialysis means, this is the number you will be given.
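For concreteness, the 1.2 target and the ~70% figure are the same statement: under the single-pool relation above, the post-dialysis concentration is C = C0 · exp(-Kt/V). A quick Python check of that arithmetic:

```python
import math

# Single-pool urea kinetics (ignoring generation and ultrafiltration):
# Kt/V = ln(C0 / C), so C / C0 = exp(-Kt/V).
kt_v = 1.2                               # empirical per-treatment target
post_pre_ratio = math.exp(-kt_v)         # post- over pre-dialysis urea
urea_reduction = 1 - post_pre_ratio

print(f"post/pre urea ratio:  {post_pre_ratio:.2f}")   # ~0.30
print(f"urea reduction ratio: {urea_reduction:.0%}")   # ~70%
```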
However, more rigorous analysis of the actual mechanisms of “uremic toxicity,” the sum total of all symptoms associated with toxin buildup due to kidney failure, reveals that urea is itself almost entirely non-toxic, dwarfed in its relevance as a toxin by middle-sized and protein-bound serum solutes such as beta-2-microglobulin or p-cresyl sulfate.
“Dialyzer clearance of urea, a surrogate toxin, is the currently accepted best measure of dialysis and dialysis adequacy, but it is admittedly a compromise due to current lack of knowledge about and inability to measure more toxic solutes. This failure could be explained if uremic toxicity is actually a summation effect of multiple toxins, each at individual subtoxic levels in the patient. Other solutes could be used as surrogates to measure clearance, but urea happens to be available in high concentrations, is easily measured by all clinical laboratories, and is easily dialyzed, so changes in concentration are sensitive indicators of clearance. Measurements of creatinine clearance are confounded by the disequilibrium that occurs across red cells within the dialyzer and in the patient. Other solutes probably behave more like creatinine than urea, so urea stands out as uniquely diffusible, a property that actually spoils its effectiveness as a surrogate toxin, especially when applied to more frequent and continuous dialysis.”
(Depner, T.A. “Uremic toxicity: urea and beyond.” 2001. Semin. Dial. 14(4): 246-51.)
So while urea is a fine marker for dialysis adequacy when patients are dialyzed by traditional means, a change in membrane means a change in the relationship between urea clearance and the clearance of other, more important toxins. If we are to meaningfully demonstrate the adequacy of a hemodialyzer incorporating our nanomembrane material, we will need to show a desirable reduction in toxin concentrations for a much wider array of molecules. Some particularly valuable targets beyond urea may include:
- Beta-2-Microglobulin
- Tumor Necrosis Factor Alpha
- Creatinine
- Serum Albumin (a binding partner of many smaller toxins)
- Parathyroid Hormone
and many others, as practical. Not all of these solutes should be cleared zealously: a deft approach may be required to avoid clearing too much of some (albumin comes to mind). The sketch below lays out the size range these candidates span. Particularly valuable references for determining which solutes to consider include Dhondt, 2000 and Vanholder, 2003.
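To make the size range concrete, here is a rough sketch of where these candidates fall. The molecular weights are standard approximate values; the clear/retain labels simply restate the reasoning above and are not final design targets:

```python
# Approximate molecular weights (Da) for the candidate solutes above.
# Labels restate the text: small and middle molecules should be cleared;
# albumin (and the toxins bound to it) must largely be retained.
solutes = {
    "urea":                 (60,     "clear"),
    "creatinine":           (113,    "clear"),
    "parathyroid hormone":  (9_400,  "clear"),
    "beta-2-microglobulin": (11_800, "clear"),
    "TNF-alpha (monomer)":  (17_300, "clear"),
    "serum albumin":        (66_500, "retain"),
}

for name, (mw, goal) in sorted(solutes.items(), key=lambda kv: kv[1][0]):
    print(f"{name:<22} {mw:>7,} Da -> {goal}")
```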
Blood Flow Rate and Vascular Access
Our initial thinking (and our intuitions) had us approaching miniaturized dialysis as a low-volume, low-flow, high-efficiency affair. This makes sense given what we already know about our membranes: because they clear solutes so quickly, we should be able to move all of the solute we want into the dialysate in one or two passes, allowing us to take our time and access the patient's blood in a comfortable low-flow capacity. However, one of the most important conclusions of my modeling is that this approach will not be effective in treating kidney failure in humans (or in large animals).
Without diving too far into the theory, we can think of the total clearance rate of a given solute as the product of two values: the fraction of the solute cleared per pass, f, and the volumetric flow rate, Q. At steady state, this product must equal a value set by the solute's generation rate, G, and its target serum concentration, C. Taking urea as an example, we need our device to satisfy:

f · Q = G/C ≈ 2.3 L/hr
For a miniaturized device (something that could be worn by a patient), even with an extremely narrow channel to increase the fractional clearance, f will not come close to 100% at a flow rate of only 2.3 L/hr. To satisfy the relationship, then, we have to increase the flow rate Q, which in turn reduces the fractional clearance (each pass is quicker), until the product f · Q finally reaches the required value. For a miniature device, this happens at flow rates on the order of 1 L/min.
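To see where the ~1 L/min figure comes from, here is a minimal numeric sketch. It assumes the standard single-pass extraction model f(Q) = 1 - exp(-K0A/Q), which holds when dialysate flow is much larger than blood flow, and an illustrative overall mass-transfer-area coefficient K0A = 2.35 L/hr; that K0A is my placeholder for a miniature device, not a measured value:

```python
import math

K0A = 2.35                # assumed mass-transfer-area coefficient, L/hr (placeholder)
REQUIRED_CLEARANCE = 2.3  # required steady-state urea clearance f * Q, L/hr

def fractional_clearance(q: float) -> float:
    """Single-pass extraction fraction, valid for dialysate flow >> blood flow."""
    return 1 - math.exp(-K0A / q)

def effective_clearance(q: float) -> float:
    return fractional_clearance(q) * q

# Bisection: effective clearance rises monotonically with Q toward K0A,
# so there is a unique flow rate where it crosses the requirement.
lo, hi = 1.0, 500.0  # L/hr
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if effective_clearance(mid) < REQUIRED_CLEARANCE:
        lo = mid
    else:
        hi = mid

q = 0.5 * (lo + hi)
print(f"required blood flow: {q:.0f} L/hr (~{q / 60:.1f} L/min), "
      f"fractional clearance {fractional_clearance(q):.1%}")
```

Under these assumptions the crossing lands near 55 L/hr (~0.9 L/min) with a per-pass fractional clearance of only a few percent, which is the sense in which low-flow, high-extraction operation is off the table.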
This is not low-flow, and by the logic just described, low-flow operation is not feasible for a wearable device (at least at this membrane scale of approximately 1-2 square inches). Depending on the exact device design, however, this flow rate may in fact be attainable with only the arteriovenous pressure drop created by the patient's own heart. Fortunately, the added cardiac demand should pose little risk to patients whose hearts are not otherwise compromised (MacRae, 2004).
Device Size and Fabrication
As mentioned in the previous section, the membrane area needed for nanomembrane hemodialysis is quite small. My current best guess at the optimal design uses just 9 square centimeters (~1.4 square inches) of active membrane area, which should be readily attainable with lift-off, trench chips, or even traditionally etched chips.
Another consideration is blood channel thickness. Dialyzer clearance depends strongly on the wall-to-membrane thickness of the channel, and the relationship is non-linear: each successive decrease in thickness increases clearance more than the last. For this reason, we are strongly incentivized to make the channel very thin in order to maximize dialyzer efficiency. The number I've settled on at this point is 50 microns, far too thin for our typical approach to silicone microfluidics. The channel will either have to be etched into another material, or the membrane will have to be supported by short posts (e.g., oxide deposited on a chip surface). We will have to develop the ability to support high flow through channels of this or comparable thickness if we are to achieve the dose of dialysis a large animal will require.
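The 1/h scaling behind that claim can be sketched with the simplest film model: for fully developed laminar flow in a thin slit the Sherwood number is roughly constant, so the blood-side mass-transfer coefficient goes as k = Sh · D / h. The Sherwood number and diffusivity below are illustrative placeholders, not fitted values:

```python
# Film model for a thin rectangular blood channel: with a roughly constant
# Sherwood number (fully developed laminar slit flow), the blood-side
# mass-transfer coefficient scales as k = Sh * D / h, i.e. as 1/h.
D_UREA = 1.8e-9  # urea diffusivity in plasma water, m^2/s (approximate, 37 C)
SH = 4.86        # placeholder Sherwood number; exact value depends on geometry

def film_coefficient(h_um: float) -> float:
    """Blood-side mass-transfer coefficient (m/s) for channel thickness h (um)."""
    return SH * D_UREA / (h_um * 1e-6)

prev = None
for h in (200, 150, 100, 50):
    k = film_coefficient(h)
    gain = "" if prev is None else f"  (+{(k - prev) * 1e6:.1f} um/s vs previous)"
    print(f"h = {h:3d} um -> k = {k * 1e6:5.1f} um/s{gain}")
    prev = k
```

Each fixed 50-micron decrement buys a larger gain in k than the one before it, which is exactly the convexity described above.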
The upside, of course, is that our membranes are so incredibly permeable that we can replace human kidney function with a device that could fit into the patient's pocket, and do a much better job of it than the giant machines in use today. And if that doesn't impress the NSF, nothing will.
