enfugue
Size: 832 × 1216 px
AI Model: DPO (Direct Preference Optimization) LoRA for SDXL and SD 1.5 (OpenRAIL++ license)
Created: January 17, 2024, 10:59 PM
Picture ID: 65b95eb4f725404891235a4b