czczup committed
Commit 1166d67
1 parent: de43c5d

Upload folder using huggingface_hub

Files changed (1):
1. README.md (+29 −0)
README.md CHANGED
@@ -352,6 +352,35 @@ print(f'User: {question}')
 print(f'Assistant: {response}')
 ```
 
+### Streaming output
+
+Besides this method, you can also use the following code to get streamed output.
+
+```python
+from transformers import TextIteratorStreamer
+from threading import Thread
+
+# Initialize the streamer
+streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
+# Define the generation configuration
+generation_config = dict(num_beams=1, max_new_tokens=1024, do_sample=False, streamer=streamer)
+# Start the model chat in a separate thread
+thread = Thread(target=model.chat, kwargs=dict(
+    tokenizer=tokenizer, pixel_values=pixel_values, question=question,
+    history=None, return_history=False, generation_config=generation_config,
+))
+thread.start()
+
+# Initialize an empty string to store the generated text
+generated_text = ''
+# Loop through the streamer to get the new text as it is generated
+for new_text in streamer:
+    if new_text == model.conv_template.sep:
+        break
+    generated_text += new_text
+    print(new_text, end='', flush=True)  # Print each new chunk of generated text on the same line
+```
+
 ## Finetune
 
 SWIFT from ModelScope community has supported the fine-tuning (Image/Video) of InternVL, please check [this link](https://github.com/modelscope/swift/blob/main/docs/source_en/Multi-Modal/internvl-best-practice.md) for more details.
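For readers unfamiliar with how `TextIteratorStreamer` works, the streaming pattern the diff adds (a producer thread pushes text chunks onto a queue while the main thread consumes them by iterating the streamer) can be sketched in plain Python, with no transformers dependency or model needed. `SimpleTextStreamer` and `fake_generate` below are hypothetical stand-ins for illustration only; they are not part of the InternVL or transformers API.

```python
import queue
import threading


class SimpleTextStreamer:
    """Minimal stand-in for transformers' TextIteratorStreamer: a producer
    thread puts text chunks on a queue; iterating the streamer yields each
    chunk until a sentinel marks the end of generation."""

    _END = object()  # sentinel signalling that generation has finished

    def __init__(self, timeout=10):
        self._queue = queue.Queue()
        self._timeout = timeout

    def put(self, text):
        # Called by the producer (generation) thread for each new chunk.
        self._queue.put(text)

    def end(self):
        # Called by the producer thread when generation is complete.
        self._queue.put(self._END)

    def __iter__(self):
        return self

    def __next__(self):
        # Block until the next chunk arrives (or the timeout elapses).
        item = self._queue.get(timeout=self._timeout)
        if item is self._END:
            raise StopIteration
        return item


def fake_generate(streamer):
    # Hypothetical stand-in for model.chat(..., streamer=streamer):
    # emits a few chunks, then signals the end of the stream.
    for chunk in ['Hello', ', ', 'world', '!']:
        streamer.put(chunk)
    streamer.end()


streamer = SimpleTextStreamer()
thread = threading.Thread(target=fake_generate, args=(streamer,))
thread.start()

generated_text = ''
for new_text in streamer:
    generated_text += new_text
    print(new_text, end='', flush=True)  # print each chunk as it arrives
thread.join()
```

Because the consumer loop blocks on the queue, text is printed as soon as each chunk is produced, which is the same behavior the README's snippet gets from running `model.chat` in a background thread.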