---
license: apache-2.0
tags:
- unsloth
- text-generation
- ui-design
---

# A model that generates UI designs (HTML, CSS, and JS) from a given prompt

## Installation

```bash
pip install unsloth
```

## Inference Code

```python
from unsloth import FastLanguageModel

max_seq_length = 4096  # adjust to fit your GPU memory
dtype = None           # None = auto-detect (float16 / bfloat16)
load_in_4bit = True    # 4-bit quantization to reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "imranali291/UIGEN_Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Generate a soft UI login form with a focus on tactile and gentle visual feedback.",  # instruction
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 4096, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
```

## Using Streaming

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "imranali291/UIGEN_Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length = 4096,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# alpaca_prompt = the same template defined above - you MUST copy it!

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Generate a pricing card component with three tiers and a highlighted plan.",  # instruction
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 4096)
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a64cad668b3906b87207df/YmdigheFcMyHZjDoMn1DY.png)
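
## Previewing the Output

The decoded output contains the full Alpaca prompt followed by the generated markup. A minimal sketch for extracting just the response and saving it for browser preview; the `extract_response` helper and the `<|endoftext|>` default are assumptions, not part of the model's API (use `tokenizer.eos_token` for the real end-of-sequence string):

```python
# Hypothetical helper (not part of the model card): keep only the text after
# the "### Response:" marker and strip any end-of-sequence token, so the
# result can be written to an .html file and opened in a browser.
def extract_response(decoded: str, eos_token: str = "<|endoftext|>") -> str:
    marker = "### Response:"
    response = decoded.split(marker, 1)[-1]
    return response.replace(eos_token, "").strip()

# Example with a mock decoded string (no GPU needed):
decoded = (
    "### Instruction:\nGenerate a button.\n\n"
    "### Response:\n<button>Click</button><|endoftext|>"
)
print(extract_response(decoded))  # <button>Click</button>
```

In practice you would pass the real model output, e.g. `extract_response(tokenizer.batch_decode(outputs)[0], tokenizer.eos_token)`, and write the result to a file such as `ui.html`.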