Physically based hair simulation and reconstruction play a crucial role in computer graphics applications, yet optimizing hair properties to match real-world observations remains challenging. We present DiffHair, a novel differentiable framework that combines extended position-based dynamics (XPBD) hair simulation with differentiable rendering to jointly optimize the rest shape and material parameters of hair. Our approach introduces a GPU-accelerated differentiable XPBD solver that efficiently handles the complex dynamics of hair, governed by stretching, bending, and twisting energies. By making both the simulation and rendering stages differentiable, our framework enables gradient-based optimization of hair parameters directly from multi-view image observations. We derive analytical gradients for the iterative XPBD update steps and implement a robust solver in the NVIDIA Warp framework, achieving significant speedups through GPU parallelization. Extensive evaluation demonstrates that our method accurately recovers physical hair properties while remaining stable across diverse hair configurations and boundary conditions. DiffHair bridges the gap between simulated and observed hair appearance, providing a practical tool for computer animation and digital human creation; we validate its effectiveness through quantitative comparisons and real-world examples.
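For reference, the per-constraint update that an XPBD solver iterates, and through which the analytical gradients must be propagated, follows the standard XPBD formulation: for a constraint $C_j$ with compliance $\alpha_j$, accumulated multiplier $\lambda_j$, mass matrix $\mathbf{M}$, and time step $\Delta t$,
\[
\Delta\lambda_j = \frac{-C_j(\mathbf{x}) - \tilde{\alpha}_j \lambda_j}{\nabla C_j \, \mathbf{M}^{-1} \nabla C_j^{\top} + \tilde{\alpha}_j},
\qquad
\Delta\mathbf{x} = \mathbf{M}^{-1} \nabla C_j^{\top} \, \Delta\lambda_j,
\qquad
\tilde{\alpha}_j = \frac{\alpha_j}{\Delta t^2}.
\]
The sketch below shows how the stretch-constraint projection of such a solver can be expressed as a differentiable NVIDIA Warp kernel. It is a minimal illustration under our own assumptions, not the paper's implementation: the kernel name, argument layout, array names, and numeric values are hypothetical, only the stretch term is shown (bending and twisting constraints would follow the same pattern), and Warp's tape mechanism (wp.Tape) stands in for the hand-derived analytical gradients described in the abstract.

```python
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def stretch_constraint_xpbd(
    x: wp.array(dtype=wp.vec3),       # predicted particle positions
    inv_mass: wp.array(dtype=float),  # per-particle inverse masses
    edge_i: wp.array(dtype=int),      # first particle index of each hair segment
    edge_j: wp.array(dtype=int),      # second particle index of each hair segment
    rest_len: wp.array(dtype=float),  # optimizable rest lengths (part of the rest shape)
    lam: wp.array(dtype=float),       # accumulated XPBD Lagrange multipliers
    dx: wp.array(dtype=wp.vec3),      # accumulated position corrections
    compliance: float,                # stretch compliance (inverse stiffness)
    dt: float,                        # substep size
):
    tid = wp.tid()
    i = edge_i[tid]
    j = edge_j[tid]

    d = x[i] - x[j]
    l = wp.length(d)
    n = d / l

    # constraint C = |x_i - x_j| - l_rest and the standard XPBD multiplier update
    c = l - rest_len[tid]
    alpha_tilde = compliance / (dt * dt)
    w = inv_mass[i] + inv_mass[j]
    dlam = (-c - alpha_tilde * lam[tid]) / (w + alpha_tilde)
    lam[tid] = lam[tid] + dlam

    # scatter position corrections; atomics handle segments sharing a particle
    wp.atomic_add(dx, i, inv_mass[i] * dlam * n)
    wp.atomic_add(dx, j, -inv_mass[j] * dlam * n)


# Minimal usage on a single four-particle strand (all values are illustrative).
pts = np.array([[0.0, 0.0, 0.0],
                [0.0, -0.011, 0.0],
                [0.0, -0.022, 0.0],
                [0.0, -0.033, 0.0]], dtype=np.float32)
num_particles, num_edges = 4, 3
x = wp.array(pts, dtype=wp.vec3, requires_grad=True)
inv_mass = wp.full(num_particles, 1.0, dtype=float)
edge_i = wp.array([0, 1, 2], dtype=int)
edge_j = wp.array([1, 2, 3], dtype=int)
rest_len = wp.full(num_edges, 0.01, dtype=float, requires_grad=True)
lam = wp.zeros(num_edges, dtype=float, requires_grad=True)
dx = wp.zeros(num_particles, dtype=wp.vec3, requires_grad=True)

tape = wp.Tape()
with tape:
    wp.launch(stretch_constraint_xpbd, dim=num_edges,
              inputs=[x, inv_mass, edge_i, edge_j, rest_len, lam, dx, 1.0e-7, 1.0 / 600.0])
# After computing a scalar loss from the corrected positions, tape.backward(loss)
# would populate rest_len.grad, which is what a rest-shape optimizer consumes.
```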