---
license: apache-2.0
model_name: Jhilik Mullick
tags:
- lora
- flux-dev
- image-generation
- fine-tuning
- safetensors
datasets: []
language: []
metrics: []
library_name: diffusers
pipeline_tag: text-to-image
base_model: black-forest-labs/FLUX.1-dev
---
model_card:
  model_id: Jhilik Mullick
  description: |
    Jhilik Mullick is a LoRA (Low-Rank Adaptation) model fine-tuned on the Flux Dev base model, designed for text-to-image generation. It is stored in the `.safetensors` format for efficient and secure weight storage.
  model_details:
    developed_by: Jhilik Mullick
    funded_by: [More Information Needed]
    shared_by: Jhilik Mullick
    model_type: LoRA (Low-Rank Adaptation) for fine-tuning
    languages: Not applicable
    license: Apache-2.0
    finetuned_from: black-forest-labs/FLUX.1-dev (Flux Dev)
    version: 1.0
    date: 2025-06-15
  model_sources:
    repository: https://huggingface.co/rstudioModel/jhilik_mallick
    paper: None
    demo: [More Information Needed]
  uses:
    direct_use: |
      The model can be used directly for generating images from text prompts by applying the LoRA weights to the Flux Dev pipeline (see how_to_get_started below). Suitable for creative applications, research, or prototyping.
    downstream_use: |
      The model can be further fine-tuned or integrated into larger applications, such as art generation tools, design software, or creative platforms; one integration path is sketched below.
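
      As a minimal sketch of one integration path (assuming the standard diffusers LoRA utilities `load_lora_weights` and `fuse_lora`; the file and output paths are illustrative), the LoRA can be merged into the base weights so downstream tools see a single self-contained pipeline:

      ```python
      from diffusers import DiffusionPipeline
      import torch

      # Load the base model and apply the LoRA (paths are placeholders)
      pipe = DiffusionPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
      )
      pipe.load_lora_weights("path/to/jhilik_mullick.safetensors")

      # Fuse the LoRA into the base weights; lora_scale controls its strength
      pipe.fuse_lora(lora_scale=1.0)

      # Save the merged pipeline for reuse in other tools or further fine-tuning
      pipe.save_pretrained("jhilik-mullick-merged")
      ```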
    out_of_scope_use: |
      - Generating harmful, offensive, or misleading content.
      - Real-time applications without optimized hardware due to potential latency.
      - Tasks outside the scope of the Flux Dev base model’s capabilities, such as text generation.
  bias_risks_limitations:
    bias: |
      The model may inherit biases from the Flux Dev base model or the fine-tuning dataset, potentially affecting output fairness or quality.
    risks: |
      Improper use could lead to generating inappropriate content. Users must validate outputs for sensitive applications.
    limitations: |
      - Performance depends on prompt quality and relevance.
      - High computational requirements for inference (recommended: 8GB+ VRAM).
      - Limited testing in edge cases or specific domains.
    recommendations: |
      Users should evaluate outputs for biases and appropriateness. For sensitive applications, implement additional filtering or validation. More information is needed to provide specific mitigation strategies.
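
      One way to structure such validation is sketched below; the `is_acceptable` callback is a placeholder for whatever filter the application requires (e.g. an NSFW classifier or a human review step) and is not part of this model or of diffusers:

      ```python
      from typing import Callable, Optional
      from PIL import Image

      def generate_with_validation(pipeline, prompt: str,
                                   is_acceptable: Callable[[Image.Image], bool],
                                   max_attempts: int = 3) -> Optional[Image.Image]:
          """Generate an image and pass it through a caller-supplied check."""
          for _ in range(max_attempts):
              image = pipeline(prompt).images[0]
              if is_acceptable(image):
                  return image
          return None  # caller decides how to handle repeated rejections
      ```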
  how_to_get_started:
    code: |
      ```python
      from diffusers import DiffusionPipeline
      import torch

      # Load the FLUX.1-dev base model in bfloat16 to reduce memory use
      base_model = DiffusionPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
      )

      # Load the LoRA weights (path is a placeholder for the downloaded .safetensors file)
      base_model.load_lora_weights("path/to/jhilik_mullick.safetensors")

      # Move to GPU if available
      device = "cuda" if torch.cuda.is_available() else "cpu"
      base_model.to(device)

      # Example inference
      output = base_model("your prompt here").images[0]
      output.save("output.png")
      ```
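
      Continuing from the snippet above, a hedged follow-up sketch: `guidance_scale`, `num_inference_steps`, and `generator` are standard diffusers pipeline arguments, and the values shown are illustrative starting points rather than tuned settings for this LoRA.

      ```python
      # Reproducible generation with explicit quality/speed controls
      generator = torch.Generator(device=device).manual_seed(42)
      image = base_model(
          "your prompt here",
          guidance_scale=3.5,        # illustrative; tune per prompt
          num_inference_steps=28,    # fewer steps = faster, lower fidelity
          generator=generator,       # fixed seed for reproducibility
      ).images[0]
      image.save("output_seeded.png")
      ```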