The rise of Integrated Application Workflows (IAWs) that process data prior to storage on persistent media prompts the need to reproduce many of the semantics of persistent storage devices within the workflow. One such feature is the ability to manage data sets as chunks with natural barriers between different data sets. To that end, we need a mechanism that ensures data moved to an intermediate storage area is both complete and correct before other processing components are allowed to access it. The Doubly Distributed Transactions (D2T) protocol offers such a mechanism. The initial development [9] suffered from scalability limitations and imposed undue requirements on server processes. The current version addresses these limitations and has demonstrated scalability with low overhead.