Ashhar committed on
Commit
7c9530c
•
1 Parent(s): 0d00aee

story loading support

Browse files
README.md CHANGED
@@ -1,6 +1,6 @@
1
  ---
2
  title: Kommuneity Story Creator
3
- emoji: 📖
4
  colorFrom: gray
5
  colorTo: green
6
  sdk: streamlit
 
1
  ---
2
  title: Kommuneity Story Creator
3
+ emoji: 🪄
4
  colorFrom: gray
5
  colorTo: green
6
  sdk: streamlit
app.py CHANGED
@@ -5,86 +5,115 @@ import pytz
5
  import time
6
  import json
7
  import re
8
- from typing import List
9
  from transformers import AutoTokenizer
10
  from gradio_client import Client
 
 
 
 
 
11
 
12
  from dotenv import load_dotenv
13
  load_dotenv()
14
 
15
- useGpt4 = os.environ.get("USE_GPT_4") == "1"
16
-
17
- if useGpt4:
18
- from openai import OpenAI
19
- client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
20
- MODEL = "gpt-4o-mini"
21
- MAX_CONTEXT = 128000
22
- tokenizer = AutoTokenizer.from_pretrained("Xenova/gpt-4o")
23
- else:
24
- from groq import Groq
25
- client = Groq(
26
- api_key=os.environ.get("GROQ_API_KEY"),
27
- )
28
- MODEL = "llama-3.1-70b-versatile"
29
- MAX_CONTEXT = 8000
30
- tokenizer = AutoTokenizer.from_pretrained("Xenova/Meta-Llama-3.1-Tokenizer")
31
 
 
 
32
 
33
  JSON_SEPARATOR = ">>>>"
 
34
 
 
 
 
 
35
 
36
- def countTokens(text):
37
- text = str(text)
38
- tokens = tokenizer.encode(text, add_special_tokens=False)
39
- return len(tokens)
 
40
 
 
 
 
 
 
41
 
42
- SYSTEM_MSG = f"""
43
- You're an storytelling assistant who guides users through four phases of narrative development, helping them craft compelling personal or professional stories. The story created should be in simple language, yet evoke great emotions.
44
- Ask one question at a time, give the options in a numbered and well formatted manner in different lines
45
- If your response has number of options to choose from, only then append your final response with this exact keyword "{JSON_SEPARATOR}", and only after this, append with the JSON of options to choose from. The JSON should be of the format:
46
  {{
47
- "options": [
48
- {{ "id": "1", "label": "Option 1"}},
49
- {{ "id": "2", "label": "Option 2"}}
50
- ]
51
  }}
 
52
  Do not write "Choose one of the options below:"
53
  Keep options to less than 9.
54
- Summarise options chosen so far in each step.
55
 
56
- # Tier 1: Story Creation
 
 
 
 
 
 
 
 
 
 
 
57
  You initiate the storytelling process through a series of engaging prompts:
58
- Story Origin:
59
- Asks users to choose between personal anecdotes or adapting a well-known story (creating a story database here of well-known stories to choose from).
60
 
61
- Story Use Case:
62
- Asks users to define the purpose of building a story (e.g., profile story, for social media content).
 
63
 
64
- Story Time Frame:
 
 
 
 
 
 
 
 
65
  Allows story selection from various life stages (childhood, mid-career, recent experiences).
66
  Or Age-wise (below 8, 8-13, 13-15 and so on).
67
 
68
- Story Focus:
69
- Prompts users to select behaviours or leadership qualities to highlight in the story.
70
- Provides a list of options based on common leadership traits:
71
- (Generosity / Integrity / Loyalty / Devotion / Kindness / Sincerity / Self-control / Confidence / Persuasiveness / Ambition / Resourcefulness / Decisiveness / Faithfulness / Patience / Determination / Persistence / Fairness / Cooperation / Optimism / Proactive / Charisma / Ethics / Relentlessness / Authority / Enthusiasm / Boldness)
72
-
73
- Story Type:
 
 
 
 
 
 
 
74
  Prompts users to select the kind of story they want to tell:
75
- Where we came from: A founding Story
76
- Why we can't stay here: A case-for-change story
77
- Where we're going: A vision story
78
- How we're going to get there: A strategy story
79
- Why I lead the way I do: Leadership philosophy story
80
- Why you should want to work here: A rallying story
81
- Personal stories: Who you are, what you do, how you do it, and who you do it for
82
- What we believe: A story about values
83
- Who we serve: A customer story
84
- What we do for our customers: A sales story
85
- How we're different: A marketing story
86
-
87
- Guided Storytelling Framework:
88
  You then lead users through a structured narrative development via the following prompts:
89
  - Describe the day it happened
90
  - What was the Call to Action / Invitation
@@ -94,14 +123,16 @@ You then lead users through a structured narrative development via the following
94
  - Detailing the resolution / Reaching the final goal
95
  - Reflecting on personal growth or lessons learned (What did you do that changed your life forever?)
96
 
97
- Now, show the story created so far, and ask for confirmation before proceeding to the next tier.
 
 
98
 
99
- # Tier 2: Story Enhancement
100
- After initial story creation, you offer congratulations on completing the first draft and gives 2 options:
101
  Option 1 - Provides option for one-on-one sessions with expert storytelling coaches - the booking can be done that at https://calendly.com/
102
  Options 2 - Provides further options for introducing users to more sophisticated narratives.
103
 
104
- If Option 2 chosen, show these options with simple explanation and chose one.
105
  You take the story and integrates it into different options of storytelling narrative structure:
106
  The Story Hanger
107
  The Story Spine
@@ -114,26 +145,67 @@ The Cliffhanger
114
  After taking user's preference, you show the final story and ask for confirmation before moving to the next tier.
115
  Allow them to iterate over different narratives to see what fits best for them.
116
 
117
- # Tier 3: Story Polishing
 
118
  The final phase focuses on refining the narrative further:
119
- You add suggestions to the story:
120
- Impactful quotes/poems / similes/comparisons
121
- Creative enhancements:
122
- Some lines or descriptions for inspiration
123
- Tips for maximising emotional resonance and memorability
 
 
124
  By guiding users through these three tiers, you aim to cater to novice storytellers, offering a comprehensive platform for narrative skill development through its adaptive approach.
125
- You end it with the final story and seeking any suggestions from the user to refine the story further.
 
126
  Once the user confirms, you congratulate them with emojis on completing the story and provide the final story in a beatifully formatted manner.
127
  Note that the final story should include twist, turns and events that make it really engaging and enjoyable to read.
128
 
129
  """
130
 
131
- USER_ICON = "man.png"
132
- AI_ICON = "Kommuneity.png"
133
- IMAGE_LOADER = "ripple.svg"
134
- TEXT_LOADER = "balls.svg"
 
135
  START_MSG = "I want to create a story 😊"
136
 
 
137
  st.set_page_config(
138
  page_title="Kommuneity Story Creator",
139
  page_icon=AI_ICON,
@@ -152,6 +224,7 @@ def pprint(log: str):
152
  print(f"[{now}] [{ipAddress}] {log}")
153
 
154
 
 
155
  pprint("\n")
156
 
157
  st.markdown(
@@ -180,6 +253,11 @@ st.markdown(
180
  font-family: 'Courier New', Courier, monospace; /* Monospace font */
181
  }
182
 
 
 
 
 
 
183
  </style>
184
  """,
185
  unsafe_allow_html=True
@@ -199,9 +277,19 @@ def __isInvalidResponse(response: str):
199
  if len(re.findall(r'\n\n', response)) > 15:
200
  return True
201
 
 
 
 
 
202
  # json response without json separator
203
  if ('{\n "options"' in response) and (JSON_SEPARATOR not in response):
204
  return True
 
 
 
 
 
 
205
 
206
 
207
  def __matchingKeywordsCount(keywords: List[str], text: str):
@@ -219,7 +307,7 @@ def __isStringNumber(s: str) -> bool:
219
  return False
220
 
221
 
222
- def __getImagePromptDetails(prompt: str, response: str):
223
  regex = r'[^a-z0-9 \n\.\-]|((the) +)'
224
 
225
  cleanedResponse = re.sub(regex, '', response.lower())
@@ -230,21 +318,21 @@ def __getImagePromptDetails(prompt: str, response: str):
230
 
231
  if (
232
  __matchingKeywordsCount(
233
- ["adapt", "profile", "social media", "purpose", "use case"],
234
  cleanedResponse
235
  ) > 2
236
- and not __isStringNumber(prompt)
237
- and cleanedPrompt in cleanedResponse
238
  and "story so far" not in cleanedResponse
239
  ):
240
  return (
241
- f'''
242
- Subject: {prompt}.
243
- Style: Fantastical, in a storybook, surreal, bokeh
244
- ''',
245
- "Painting your character ..."
246
  )
247
 
 
 
 
 
248
  '''
249
  Mood: ethereal lighting that emphasizes the fantastical nature of the scene.
250
 
@@ -268,37 +356,67 @@ def __getImagePromptDetails(prompt: str, response: str):
268
  relevantResponse = response[:storyEndIdx]
269
  pprint(f"{relevantResponse=}")
270
  return (
271
- f"photo of a scene from this text: {relevantResponse}",
272
- "Imagining your scene (beta) ..."
 
 
 
273
  )
 
 
 
274
 
275
- return (None, None)
276
-
277
-
278
- def __resetButtonState():
279
- st.session_state["buttonValue"] = ""
280
 
281
 
282
- def __setStartMsg(msg):
283
- st.session_state.startMsg = msg
284
-
285
-
286
- if "chatHistory" not in st.session_state:
287
- st.session_state.chatHistory = []
 
 
288
 
289
- if "messages" not in st.session_state:
290
- st.session_state.messages = []
 
 
 
 
 
 
 
 
 
 
291
 
292
- if "buttonValue" not in st.session_state:
293
- __resetButtonState()
294
 
295
- if "startMsg" not in st.session_state:
296
- st.session_state.startMsg = ""
297
 
298
 
299
  def __getMessages():
300
  def getContextSize():
301
- currContextSize = countTokens(SYSTEM_MSG) + countTokens(st.session_state.messages) + 100
302
  pprint(f"{currContextSize=}")
303
  return currContextSize
304
 
@@ -309,30 +427,57 @@ def __getMessages():
309
  return st.session_state.messages
310
 
311
 
312
- def predict():
313
- messagesFormatted = [{"role": "system", "content": SYSTEM_MSG}]
314
- messagesFormatted.extend(__getMessages())
315
- contextSize = countTokens(messagesFormatted)
316
  pprint(f"{contextSize=} | {MODEL}")
 
317
 
318
- response = client.chat.completions.create(
319
- model=MODEL,
320
- messages=messagesFormatted,
321
- temperature=0.8,
322
- max_tokens=4000,
323
- stream=True
324
- )
325
-
326
- chunkCount = 0
327
- for chunk in response:
328
- chunkContent = chunk.choices[0].delta.content
329
- if chunkContent:
330
- chunkCount += 1
331
- yield chunkContent
332
 
 
 
333
 
334
- def generateImage(prompt: str):
335
- pprint(f"imagePrompt={prompt}")
 
 
336
  fluxClient = Client("black-forest-labs/FLUX.1-schnell")
337
  result = fluxClient.predict(
338
  prompt=prompt,
@@ -347,10 +492,86 @@ def generateImage(prompt: str):
347
  return result
348
 
349
 
350
- st.title("Kommuneity Story Creator 📖")
351
- if not (st.session_state["buttonValue"] or st.session_state["startMsg"]):
 
 
352
  st.button(START_MSG, on_click=lambda: __setStartMsg(START_MSG))
353
 
 
354
  for chat in st.session_state.chatHistory:
355
  role = chat["role"]
356
  content = chat["content"]
@@ -361,29 +582,36 @@ for chat in st.session_state.chatHistory:
361
  if imagePath:
362
  st.image(imagePath)
363
 
364
- if prompt := (st.chat_input() or st.session_state["buttonValue"] or st.session_state["startMsg"]):
 
 
 
 
 
 
365
  __resetButtonState()
 
366
  __setStartMsg("")
 
367
 
368
  with st.chat_message("user", avatar=USER_ICON):
369
  st.markdown(prompt)
370
  pprint(f"{prompt=}")
371
- st.session_state.messages.append({"role": "user", "content": prompt})
372
  st.session_state.chatHistory.append({"role": "user", "content": prompt })
 
373
 
374
  with st.chat_message("assistant", avatar=AI_ICON):
375
  responseContainer = st.empty()
376
 
377
  def __printAndGetResponse():
378
  response = ""
379
- # responseContainer.markdown(".....")
380
  responseContainer.image(TEXT_LOADER)
381
  responseGenerator = predict()
382
 
383
  for chunk in responseGenerator:
384
  response += chunk
385
  if __isInvalidResponse(response):
386
- pprint(f"{response=}")
387
  return
388
 
389
  if JSON_SEPARATOR not in response:
@@ -425,34 +653,40 @@ if prompt := (st.chat_input() or st.session_state["buttonValue"] or st.session_s
425
  )
426
  # imgContainer.markdown(f"`{loaderText}`")
427
  imgContainer.image(IMAGE_LOADER)
428
- (imagePath, seed) = generateImage(imagePrompt)
429
  imageContainer.image(imagePath)
430
  except Exception as e:
431
  pprint(e)
432
  imageContainer.empty()
433
 
 
 
 
 
 
 
 
 
 
 
434
  if jsonStr:
435
  try:
436
  json.loads(jsonStr)
437
  jsonObj = json.loads(jsonStr)
438
- options = jsonObj["options"]
439
-
440
- for option in options:
441
- st.button(
442
- option["label"],
443
- key=option["id"],
444
- on_click=lambda label=option["label"]: selectButton(label)
445
- )
446
- # st.code(jsonStr, language="json")
 
 
 
 
 
447
  except Exception as e:
448
  pprint(e)
449
 
450
- st.session_state.messages.append({
451
- "role": "assistant",
452
- "content": response,
453
- })
454
- st.session_state.chatHistory.append({
455
- "role": "assistant",
456
- "content": response,
457
- "image": imagePath,
458
- })
 
5
  import time
6
  import json
7
  import re
8
+ from typing import List, Literal, TypedDict, Tuple
9
  from transformers import AutoTokenizer
10
  from gradio_client import Client
11
+ from data import storiesDb
12
+
13
+ from openai import OpenAI
14
+ import anthropic
15
+ from groq import Groq
16
 
17
  from dotenv import load_dotenv
18
  load_dotenv()
19
 
20
+ ModelType = Literal["GPT4", "CLAUDE", "LLAMA"]
21
+ ModelConfig = TypedDict("ModelConfig", {
22
+ "client": OpenAI | Groq | anthropic.Anthropic,
23
+ "model": str,
24
+ "max_context": int,
25
+ "tokenizer": AutoTokenizer
26
+ })
 
 
 
 
 
 
 
 
 
27
 
28
+ modelType: ModelType = os.environ.get("MODEL_TYPE") or "LLAMA"
29
+ modelType: ModelType = "CLAUDE"
30
 
31
  JSON_SEPARATOR = ">>>>"
32
+ EXCEPTION_KEYWORD = "<<EXCEPTION>>"
33
 
34
+ SYSTEM_MSG = f"""
35
+ => Context:
36
+ You're a storytelling assistant who guides users through three tiers of narrative development, helping them craft compelling personal or professional stories.
37
+ The story created should be in simple language, yet evoke great emotions.
38
 
39
+ -----
40
+ => Key Points:
41
+ Ask one question at a time, give the options in a numbered and well formatted manner in different lines.
42
+ Summarise options chosen so far in each step.
43
+ Every response should have a question unless it's the end of flow.
44
 
45
+ -----
46
+ => Format & Syntax:
47
+ Whenever any of the rules below is satisfied, append your FINAL response with this exact keyword "{JSON_SEPARATOR}", and only AFTER this, append the JSON described in the matching rule below.
48
+ Apply at most one rule at a time, the most relevant one.
49
+ Do not write anything after the JSON
50
 
51
+ - Rule 1: If your response has multiple numbered options to choose from, append JSON in this format (always check for this rule):
52
+ ```
 
 
53
  {{
54
+ "options": [{{ "id": "1", "label": "Option 1"}}, {{ "id": "2", "label": "Option 2"}}]
 
 
 
55
  }}
56
+ ```
57
  Do not write "Choose one of the options below:"
58
  Keep options to less than 9.
 
59
 
60
+ - Rule 2: If the USER has chosen to adapt a well-known story, append this JSON:
61
+ ```
62
+ {{
63
+ "action": "SHOW_STORY_DATABASE"
64
+ }}
65
+ ```
66
+ ------
67
+ => Task Definition:
68
+ You take the user through a flow of questions as defined below. You'll navigate the user through three tiers, moving closer to the final story.
69
+ Before giving any response, make sure to evaluate the "Format" rules described above.
70
+
71
+ ## Tier 1: Story Creation
72
  You initiate the storytelling process through a series of engaging prompts:
 
 
73
 
74
+ #### Story Origin:
75
+ - Asks users to choose between personal anecdotes and adapting a well-known real story
76
+ - If they choose to adapt a well-known story, show them a database of stories to choose from
77
 
78
+ #### Story Use Case:
79
+ Asks users to define the purpose of building a story. It can be one of the following (provide a very short description for each):
80
+ - Personal Branding: To create a narrative that highlights an individual's unique experiences, skills, and values for use in professional networking, job applications, or personal websites.
81
+ - Company Origin: To craft a compelling narrative about how a company or organization was founded, its mission, and key milestones for use in marketing materials or investor presentations.
82
+ - Product Launch: To develop an engaging narrative around a new product or service, focusing on the problem it solves and its unique value proposition for use in marketing campaigns or sales pitches.
83
+ - Customer Success / Testimonials: To showcase how a product or service has positively impacted a customer's life or business, creating a relatable narrative for potential customers.
84
+ - Team Building: To create a shared narrative that reinforces company values, promotes team cohesion, or introduces new team members, for use in internal communications or team-building exercises.
85
+
86
+ #### Story Time Frame:
87
  Allows story selection from various life stages (childhood, mid-career, recent experiences).
88
  Or Age-wise (below 8, 8-13, 13-15 and so on).
89
 
90
+ #### Story Focus:
91
+ Prompts users to select behaviours or leadership qualities to highlight in the story. Allow users to choose up to 3-5 qualities.
92
+ - Resourcefulness (ability to find creative solutions)
93
+ - Sincerity (genuine and honest in intentions and words)
94
+ - Decisiveness (ability to make firm and timely decisions)
95
+ - Kindness (concern and compassion for others' well-being)
96
+ - Ambition (drive to achieve goals and succeed)
97
+ - Patience (ability to endure difficult situations calmly)
98
+ - Boldness (willingness to take risks and speak up)
99
+ - Fairness (commitment to justice and equal treatment)
100
+ - Proactive (taking initiative and anticipating challenges)
101
+
102
+ #### Story Type:
103
  Prompts users to select the kind of story they want to tell:
104
+ - Where we came from: A founding Story
105
+ - Why we can't stay here: A case-for-change story
106
+ - Where we're going: A vision story
107
+ - How we're going to get there: A strategy story
108
+ - Why I lead the way I do: Leadership philosophy story
109
+ - Why you should want to work here: A rallying story
110
+ - Personal stories: Who you are, what you do, how you do it, and who you do it for
111
+ - What we believe: A story about values
112
+ - Who we serve: A customer story
113
+ - What we do for our customers: A sales story
114
+ - How we're different: A marketing story
115
+
116
+ #### Guided Storytelling Framework:
117
  You then lead users through a structured narrative development via the following prompts:
118
  - Describe the day it happened
119
  - What was the Call to Action / Invitation
 
123
  - Detailing the resolution / Reaching the final goal
124
  - Reflecting on personal growth or lessons learned (What did you do that changed your life forever?)
125
 
126
+ Now, show the story created so far using the Story Spine structure as the default style, and then ask for confirmation before proceeding to the next tier.
127
+ If the user has any suggestions, incorporate them and then show the story again.
128
+
129
 
130
+ ## Tier 2: Story Enhancement
131
+ #### After initial story creation, you offer congratulations on completing the first draft and give 2 options:
132
  Option 1 - Provides the option of one-on-one sessions with expert storytelling coaches - the booking can be done at https://calendly.com/
133
  Option 2 - Provides further options for introducing users to more sophisticated narratives.
134
 
135
+ #### If Option 2 is chosen, show these options with a simple explanation and let the user choose one.
136
  You take the story and integrate it into one of several storytelling narrative structures:
137
  The Story Hanger
138
  The Story Spine
 
145
  After taking the user's preference, you show the final story and ask for confirmation before moving to the next tier.
146
  Allow them to iterate over different narratives to see what fits best for them.
147
 
148
+
149
+ ## Tier 3: Story Polishing
150
  The final phase focuses on refining the narrative further:
151
+ - You add suggestions to the story:
152
+ - Impactful quotes/poems / similes/comparisons
153
+
154
+ #### Creative enhancements:
155
+ - Some lines or descriptions for inspiration
156
+ - Tips for maximising emotional resonance and memorability
157
+
158
  By guiding users through these three tiers, you aim to cater to novice storytellers, offering a comprehensive platform for narrative skill development through an adaptive approach.
159
+ You end it with the final story and seek any suggestions from the user to refine the story further.
160
+
161
  Once the user confirms, you congratulate them with emojis on completing the story and provide the final story in a beautifully formatted manner.
162
  Note that the final story should include twists, turns, and events that make it really engaging and enjoyable to read.
163
 
164
  """
165
 
166
+ USER_ICON = "icons/man.png"
167
+ AI_ICON = "icons/Kommuneity.png"
168
+ IMAGE_LOADER = "icons/Wedges.svg"
169
+ TEXT_LOADER = "icons/balls.svg"
170
+ DB_LOADER = "icons/db_loader.svg"
171
  START_MSG = "I want to create a story 😊"
172
 
173
+
174
+ MODEL_CONFIG: dict[ModelType, ModelConfig] = {
175
+ "GPT4": {
176
+ "client": OpenAI(api_key=os.environ.get("OPENAI_API_KEY")),
177
+ "model": "gpt-4o-mini",
178
+ "max_context": 128000,
179
+ "tokenizer": AutoTokenizer.from_pretrained("Xenova/gpt-4o")
180
+ },
181
+ "CLAUDE": {
182
+ "client": anthropic.Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY")),
183
+ "model": "claude-3-sonnet-20240229",
184
+ "max_context": 128000,
185
+ "tokenizer": AutoTokenizer.from_pretrained("Xenova/claude-tokenizer")
186
+ },
187
+ "LLAMA": {
188
+ "client": Groq(api_key=os.environ.get("GROQ_API_KEY")),
189
+ "model": "llama-3.1-70b-versatile",
190
+ "max_context": 128000,
191
+ "tokenizer": AutoTokenizer.from_pretrained("Xenova/Meta-Llama-3.1-Tokenizer")
192
+ }
193
+ }
194
+
195
+ client = MODEL_CONFIG[modelType]["client"]
196
+ MODEL = MODEL_CONFIG[modelType]["model"]
197
+ MAX_CONTEXT = MODEL_CONFIG[modelType]["max_context"]
198
+ tokenizer = MODEL_CONFIG[modelType]["tokenizer"]
199
+
200
+ isClaudeModel = modelType == "CLAUDE"
201
+
202
+
203
+ def __countTokens(text):
204
+ text = str(text)
205
+ tokens = tokenizer.encode(text, add_special_tokens=False)
206
+ return len(tokens)
207
+
208
+
209
  st.set_page_config(
210
  page_title="Kommuneity Story Creator",
211
  page_icon=AI_ICON,
 
224
  print(f"[{now}] [{ipAddress}] {log}")
225
 
226
 
227
+ pprint("\n")
228
  pprint("\n")
229
 
230
  st.markdown(
 
253
  font-family: 'Courier New', Courier, monospace; /* Monospace font */
254
  }
255
 
256
+ div[aria-label="dialog"] {
257
+ width: 80vw;
258
+ height: 600px;
259
+ }
260
+
261
  </style>
262
  """,
263
  unsafe_allow_html=True
 
277
  if len(re.findall(r'\n\n', response)) > 15:
278
  return True
279
 
280
+ # LLM API threw exception
281
+ if EXCEPTION_KEYWORD in response:
282
+ return True
283
+
284
  # json response without json separator
285
  if ('{\n "options"' in response) and (JSON_SEPARATOR not in response):
286
  return True
287
+ if ('{\n "action"' in response) and (JSON_SEPARATOR not in response):
288
+ return True
289
+
290
+ # only options with no text
291
+ if response.startswith(JSON_SEPARATOR):
292
+ return True
293
 
294
 
295
  def __matchingKeywordsCount(keywords: List[str], text: str):
 
307
  return False
308
 
309
 
310
+ def __getRawImagePromptDetails(prompt: str, response: str) -> Tuple[str, str, str]:
311
  regex = r'[^a-z0-9 \n\.\-]|((the) +)'
312
 
313
  cleanedResponse = re.sub(regex, '', response.lower())
 
318
 
319
  if (
320
  __matchingKeywordsCount(
321
+ ["adapt", "personal branding", "purpose", "use case"],
322
  cleanedResponse
323
  ) > 2
 
 
324
  and "story so far" not in cleanedResponse
325
  ):
326
  return (
327
+ f"Extract the name of the selected story from this text and add a few more details about it:\n{response}",
328
+ "Effect: bokeh",
329
+ "Painting your character ...",
 
 
330
  )
331
 
332
+ '''
333
+ Style: Fantastical, in a storybook, surreal, bokeh
334
+ '''
335
+
336
  '''
337
  Mood: ethereal lighting that emphasizes the fantastical nature of the scene.
338
 
 
356
  relevantResponse = response[:storyEndIdx]
357
  pprint(f"{relevantResponse=}")
358
  return (
359
+ f"Extract the story plot from this text:\n{response}",
360
+ """
361
+ Style: In a storybook, surreal
362
+ """,
363
+ "Imagining your scene (beta) ...",
364
  )
365
+ """
366
+ photo of a scene from this text: {relevantResponse}.
367
+ """
368
 
369
+ return (None, None, None)
 
 
 
 
370
 
371
 
372
+ def __getImagePromptDetails(prompt: str, response: str):
373
+ (enhancePrompt, imagePrompt, loaderText) = __getRawImagePromptDetails(prompt, response)
374
+
375
+ if imagePrompt or enhancePrompt:
376
+ pprint(f"[Raw] {enhancePrompt=} | {imagePrompt=}")
377
+
378
+ promptEnhanceModelType: ModelType = "LLAMA"
379
+ pprint(f"{promptEnhanceModelType=}")
380
+
381
+ modelConfig = MODEL_CONFIG[promptEnhanceModelType]
382
+ client = modelConfig["client"]
383
+ model = modelConfig["model"]
384
+ isClaudeModel = promptEnhanceModelType == "CLAUDE"
385
+
386
+ systemPrompt = "You help in creating prompts for image generation"
387
+ promptPrefix = f"{enhancePrompt}\nAnd then use the above to" if enhancePrompt else "Use the text below to"
388
+
389
+ llmArgs = {
390
+ "model": model,
391
+ "messages": [{
392
+ "role": "user",
393
+ "content": f"{promptPrefix} create a prompt for image generation (limit to less than 500 words)\n\n{imagePrompt}"
394
+ }],
395
+ "temperature": 0.8,
396
+ "max_tokens": 2000
397
+ }
398
 
399
+ if isClaudeModel:
400
+ llmArgs["system"] = systemPrompt
401
+ response = client.messages.create(**llmArgs)
402
+ imagePrompt = response.content[0].text
403
+ else:
404
+ llmArgs["messages"] = [
405
+ {"role": "system", "content": systemPrompt},
406
+ *llmArgs["messages"]
407
+ ]
408
+ response = client.chat.completions.create(**llmArgs)
409
+ responseMessage = response.choices[0].message
410
+ imagePrompt = responseMessage.content
411
 
412
+ pprint(f"[Enhanced] {imagePrompt=}")
 
413
 
414
+ return (imagePrompt, loaderText)
 
415
 
416
 
417
  def __getMessages():
418
  def getContextSize():
419
+ currContextSize = __countTokens(SYSTEM_MSG) + __countTokens(st.session_state.messages) + 100
420
  pprint(f"{currContextSize=}")
421
  return currContextSize
422
 
 
427
  return st.session_state.messages
428
 
429
 
430
+ def __logLlmRequest(messagesFormatted: list):
431
+ contextSize = __countTokens(messagesFormatted)
 
 
432
  pprint(f"{contextSize=} | {MODEL}")
433
+ # pprint(f"{messagesFormatted=}")
434
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
435
 
436
+ def predict():
437
+ messagesFormatted = []
438
 
439
+ try:
440
+ if isClaudeModel:
441
+ messagesFormatted.extend(__getMessages())
442
+ __logLlmRequest(messagesFormatted)
443
+
444
+ with client.messages.stream(
445
+ model=MODEL,
446
+ messages=messagesFormatted,
447
+ system=SYSTEM_MSG,
448
+ max_tokens=4000,
449
+ ) as stream:
450
+ for text in stream.text_stream:
451
+ yield text
452
+ else:
453
+ messagesFormatted.append(
454
+ {"role": "system", "content": SYSTEM_MSG}
455
+ )
456
+ messagesFormatted.extend(__getMessages())
457
+ __logLlmRequest(messagesFormatted)
458
+
459
+ response = client.chat.completions.create(
460
+ model=MODEL,
461
+ messages=messagesFormatted,
462
+ temperature=0.8,
463
+ max_tokens=4000,
464
+ stream=True
465
+ )
466
+
467
+ for chunk in response:
468
+ choices = chunk.choices
469
+ if not choices:
470
+ pprint("Empty chunk")
471
+ continue
472
+ chunkContent = chunk.choices[0].delta.content
473
+ if chunkContent:
474
+ yield chunkContent
475
+ except Exception as e:
476
+ pprint(f"LLM API Error: {e}")
477
+ yield EXCEPTION_KEYWORD
478
+
479
+
480
+ def __generateImage(prompt: str):
481
  fluxClient = Client("black-forest-labs/FLUX.1-schnell")
482
  result = fluxClient.predict(
483
  prompt=prompt,
 
492
  return result
493
 
494
 
495
+ st.title("Kommuneity Story Creator 🪄")
496
+
497
+
498
+ def __resetButtonState():
499
+ st.session_state.buttonValue = ""
500
+
501
+
502
+ def __resetSelectedStory():
503
+ st.session_state.selectedStory = ""
504
+
505
+
506
+ def __setStartMsg(msg):
507
+ st.session_state.startMsg = msg
508
+
509
+
510
+ if "chatHistory" not in st.session_state:
511
+ st.session_state.chatHistory = []
512
+
513
+ if "messages" not in st.session_state:
514
+ st.session_state.messages = []
515
+
516
+ if "buttonValue" not in st.session_state:
517
+ __resetButtonState()
518
+
519
+ if "selectedStory" not in st.session_state:
520
+ st.session_state.selectedStory = ""
521
+
522
+ if "startMsg" not in st.session_state:
523
+ st.session_state.startMsg = ""
524
  st.button(START_MSG, on_click=lambda: __setStartMsg(START_MSG))
525
 
526
+ if "showStoryDbDialog" not in st.session_state:
527
+ st.session_state.showStoryDbDialog = False
528
+
529
+
530
+ def __disableStoryDbDialog():
531
+ if st.session_state.showStoryDbDialog:
532
+ st.session_state.showStoryDbDialog = False
533
+ st.rerun()
534
+
535
+
536
+ def __enableStoryDbDialog():
537
+ if not st.session_state.showStoryDbDialog:
538
+ st.session_state.showStoryDbDialog = True
539
+ st.rerun()
540
+
541
+
542
+ if st.session_state.showStoryDbDialog:
543
+ @st.dialog("Choose a popular story", width="large")
544
+ def __openStoryDbDialog():
545
+ storyPlaceholder = st.empty()
546
+ col1, col2, col3 = storyPlaceholder.columns([1, 1, 1])
547
+ col2.image(DB_LOADER)
548
+ col2.write(
549
+ """
550
+ <div class='blinking code'>
551
+ Loading from database ...
552
+ </div>
553
+ """,
554
+ unsafe_allow_html=True
555
+ )
556
+
557
+ stories = storiesDb.getAllStories()
558
+ with storyPlaceholder.container(border=False, height=500):
559
+ for idx, story in enumerate(stories):
560
+ storyTitle = story['Story Title']
561
+ storyDetails = story['Story Text']
562
+ with st.expander(storyTitle):
563
+ st.markdown(storyDetails)
564
+ if st.button(
565
+ "Select",
566
+ key=f"select_{idx}",
567
+ type="primary",
568
+ use_container_width=True
569
+ ):
570
+ st.session_state.selectedStory = storyTitle
571
+ __disableStoryDbDialog()
572
+
573
+ __openStoryDbDialog()
574
+
575
  for chat in st.session_state.chatHistory:
576
  role = chat["role"]
577
  content = chat["content"]
 
582
  if imagePath:
583
  st.image(imagePath)
584
 
585
+
586
+ if prompt := (
587
+ st.chat_input()
588
+ or st.session_state["buttonValue"]
589
+ or st.session_state["selectedStory"]
590
+ or st.session_state["startMsg"]
591
+ ):
592
  __resetButtonState()
593
+ __resetSelectedStory()
594
  __setStartMsg("")
595
+ __disableStoryDbDialog()
596
 
597
  with st.chat_message("user", avatar=USER_ICON):
598
  st.markdown(prompt)
599
  pprint(f"{prompt=}")
 
600
  st.session_state.chatHistory.append({"role": "user", "content": prompt })
601
+ st.session_state.messages.append({"role": "user", "content": prompt})
602
 
603
  with st.chat_message("assistant", avatar=AI_ICON):
604
  responseContainer = st.empty()
605
 
606
  def __printAndGetResponse():
607
  response = ""
 
608
  responseContainer.image(TEXT_LOADER)
609
  responseGenerator = predict()
610
 
611
  for chunk in responseGenerator:
612
  response += chunk
613
  if __isInvalidResponse(response):
614
+ pprint(f"InvalidResponse={response}")
615
  return
616
 
617
  if JSON_SEPARATOR not in response:
 
653
  )
654
  # imgContainer.markdown(f"`{loaderText}`")
655
  imgContainer.image(IMAGE_LOADER)
656
+ (imagePath, seed) = __generateImage(imagePrompt)
657
  imageContainer.image(imagePath)
658
  except Exception as e:
659
  pprint(e)
660
  imageContainer.empty()
661
 
662
+ st.session_state.chatHistory.append({
663
+ "role": "assistant",
664
+ "content": response,
665
+ "image": imagePath,
666
+ })
667
+ st.session_state.messages.append({
668
+ "role": "assistant",
669
+ "content": response,
670
+ })
671
+
672
  if jsonStr:
673
  try:
674
  json.loads(jsonStr)
675
  jsonObj = json.loads(jsonStr)
676
+ options = jsonObj.get("options")
677
+ action = jsonObj.get("action")
678
+
679
+ if options:
680
+ for option in options:
681
+ st.button(
682
+ option["label"],
683
+ key=option["id"],
684
+ on_click=lambda label=option["label"]: selectButton(label)
685
+ )
686
+ elif action:
687
+ if action == "SHOW_STORY_DATABASE":
688
+ __enableStoryDbDialog()
689
+ # st.code(jsonStr, language="json")
690
  except Exception as e:
691
  pprint(e)
692
 
 
 
 
 
 
 
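The `options` / `action` handling above follows the response contract set in `SYSTEM_MSG`: free-form text, optionally followed by the `>>>>` separator and a JSON payload carrying either numbered `options` (Rule 1) or an `action` such as `SHOW_STORY_DATABASE` (Rule 2). A minimal sketch of that split-and-dispatch step, reusing the names from this commit; the two helper functions are illustrative and not part of app.py:

```python
import json

JSON_SEPARATOR = ">>>>"

def splitResponse(response: str):
    # Split an assistant reply into (visible text, parsed JSON payload or None)
    if JSON_SEPARATOR not in response:
        return response, None
    text, _, jsonStr = response.partition(JSON_SEPARATOR)
    try:
        return text.strip(), json.loads(jsonStr)
    except json.JSONDecodeError:
        return text.strip(), None

def handlePayload(payload):
    # Rule 1: numbered options -> render one button per option
    if payload and payload.get("options"):
        return [(opt["id"], opt["label"]) for opt in payload["options"]]
    # Rule 2: the user chose to adapt a well-known story -> open the story picker
    if payload and payload.get("action") == "SHOW_STORY_DATABASE":
        return "SHOW_STORY_DATABASE"
    return None

text, payload = splitResponse(
    'Pick a use case:\n1. Personal Branding\n2. Company Origin\n'
    '>>>> {"options": [{"id": "1", "label": "Personal Branding"}, {"id": "2", "label": "Company Origin"}]}'
)
print(handlePayload(payload))  # [('1', 'Personal Branding'), ('2', 'Company Origin')]
```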
 
 
 
data/storiesDb.py ADDED
@@ -0,0 +1,14 @@
 
 
1
+ import os
2
+ from supabase import create_client, Client
3
+
4
+ from dotenv import load_dotenv
5
+ load_dotenv()
6
+
7
+ url: str = os.environ.get("SUPABASE_URL")
8
+ key: str = os.environ.get("SUPABASE_KEY")
9
+ supabase: Client = create_client(url, key)
10
+
11
+
12
+ def getAllStories():
13
+ response = supabase.table("existing_stories").select("*").execute()
14
+ return response.data
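
app.py consumes this module as `from data import storiesDb`, and the story-picker dialog expects each returned row to expose `Story Title` and `Story Text` columns. A small usage sketch, assuming `SUPABASE_URL` / `SUPABASE_KEY` are set and the `existing_stories` table is populated:

```python
from data import storiesDb

# Each row comes back as a dict keyed by column name
for story in storiesDb.getAllStories():
    print(story["Story Title"])
    print(story["Story Text"][:100], "...")
```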
Kommune.png → icons/Kommune.png RENAMED
File without changes
Kommune_1.webp → icons/Kommune_1.webp RENAMED
File without changes
Kommuneity.png → icons/Kommuneity.png RENAMED
File without changes
icons/Wedges.svg ADDED
balls.svg → icons/balls.svg RENAMED
File without changes
bars_loader.svg → icons/bars_loader.svg RENAMED
File without changes
icons/brush.gif ADDED
icons/db_loader.svg ADDED
man.png → icons/man.png RENAMED
File without changes
ripple.svg → icons/ripple.svg RENAMED
File without changes
requirements.txt CHANGED
@@ -1,4 +1,6 @@
1
  python-dotenv
2
  groq
3
  transformers
4
- gradio_client
 
 
 
1
  python-dotenv
2
  groq
3
  transformers
4
+ gradio_client
5
+ anthropic
6
+ supabase
tools/webScraper.py ADDED
@@ -0,0 +1,45 @@
 
 
1
+ from urllib.parse import parse_qs, urlparse
2
+ from bs4 import BeautifulSoup
3
+ import requests
4
+
5
+
6
+ def scrapeGoogleSearch(query):
7
+ result = []
8
+
9
+ searchUrl = f"https://www.google.com/search?q={query}"
10
+ response = requests.get(searchUrl)
11
+ if response.status_code == 200:
12
+ soup = BeautifulSoup(response.text, 'html.parser')
13
+ with open('soupDump.html', 'w', encoding='utf-8') as file:
14
+ file.write(soup.prettify())
15
+
16
+ results = soup.find('body')
17
+ mainDiv = soup.find('div', attrs={'id': 'main'})
18
+ answerDiv = (
19
+ mainDiv.select_one('div.PqksIc')
20
+ or mainDiv.select_one('div.BNeawe.iBp4i')
21
+ )
22
+ if answerDiv:
23
+ citationDateDiv = answerDiv.select_one('sub.gMUaMb.r0bn4c.rQMQod')
24
+ citationDate = citationDateDiv.text if citationDateDiv else ""
25
+ answerText = answerDiv.text.replace(citationDate, '').strip()
26
+ citationText = f"Citation Date: {citationDate}" if citationDate else ""
27
+ result.append(f"====\n{answerText}\n{citationText}\n====\n\n")
28
+
29
+ results = mainDiv.select('div.egMi0.kCrYT')
30
+ resultsDesc = mainDiv.select('div.BNeawe.s3v9rd.AP7Wnd .BNeawe.s3v9rd.AP7Wnd:last-child')
31
+
32
+ for (i, item) in enumerate(results[:10]):
33
+ title = item.find('h3').text
34
+ link = item.find('a')['href']
35
+ parsedUrl = urlparse(link)
36
+ urlParams = parse_qs(parsedUrl.query)
37
+ link = urlParams.get('q', [None])[0]
38
+ desc = resultsDesc[i].text
39
+ result.append(f"Title: {title}")
40
+ result.append(f"Description: {desc}")
41
+ result.append(f"Link: {link}\n")
42
+ else:
43
+ print("Failed to retrieve search results.")
44
+
45
+ return "".join(result)
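
A quick way to exercise the scraper above; this is best-effort only, since Google may rate-limit the request or change the CSS class names these selectors rely on:

```python
from tools.webScraper import scrapeGoogleSearch

# Prints the answer box (if present) followed by up to 10 titled results,
# and dumps the fetched page to soupDump.html for inspection
print(scrapeGoogleSearch("story spine narrative structure"))
```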