enfugue
Size: 1792 x 2304
AI Model: DPO (Direct Preference Optimization) LoRA for XL and 1.5 - OpenRail++
Date created: December 26, 2023, 7:39 PM
Picture ID: 65b95e8ef7254048911f69a6