Update `teleprompt` documentation (darinkishore/dspy#46)
✓ Completed in 9 minutes, 6 months ago using GPT-4
Progress
Modify
docs/teleprompters/teleprompters.md:2-282
Changed docs/teleprompters/teleprompters.md
in e344da5
````diff
@@ LabeledFewShot constructor @@
 ### Constructor
 
-The constructor initializes the `LabeledFewShot` class and sets up its attributes, particularly defining `k` number of samples to be used by the predictor.
+The constructor initializes the `LabeledFewShot` class with the specified number of demos `k` to be used for each predictor. If `sample` is `True`, the demos are chosen randomly from the `trainset`; otherwise, the first `k` demos from the `trainset` are selected.
 
 ```python
 class LabeledFewShot(Teleprompter):
@@ LabeledFewShot.compile @@
 #### `compile(self, student, *, trainset)`
 
-This method compiles the `LabeledFewShot` instance by configuring the `student` predictor. It assigns subsets of the `trainset` in each student's predictor's `demos` attribute. If the `trainset` is empty, the method returns the original `student`.
+This method compiles the `LabeledFewShot` instance by preparing the `student` module with demo samples from the `trainset` for each of the student's predictors. Based on the `sample` parameter, it either samples randomly from the training demos or takes the first `k` demos, where `k` is the limit set during construction of the `LabeledFewShot` instance. It assigns these subsets of the `trainset` to each predictor's `demos` attribute. If the `trainset` is empty, the method returns the original `student`.
 
 **Parameters:**
 - `student` (_Teleprompter_): Student predictor to be compiled.
-- `trainset` (_list_): Training dataset for compiling with student predictor.
+- `trainset` (_list_): A list of example objects to be used as training demos.
+- `sample` (_bool_, optional): Determines if the demos should be randomly sampled from the `trainset`. Defaults to `True`.
 
 **Returns:**
 - The compiled `student` predictor with assigned training samples for each predictor or the original `student` if the `trainset` is empty.
@@ BootstrapFewShot example @@
 #Assume defined RAG class
 ...
 
-#Define teleprompter and include teacher
 teacher = dspy.OpenAI(model='gpt-3.5-turbo', api_key = openai.api_key, api_provider = "openai", model_type = "chat")
-teleprompter = BootstrapFewShot(teacher_settings=dict({'lm': teacher}))
 
 # Compile!
 compiled_rag = teleprompter.compile(student=RAG(), trainset=trainset)
 ```
@@ Ensemble parameters @@
 **Parameters:**
-- `reduce_fn` (_callable_, _optional_): Function used to reduce multiple outputs from different programs into a single output. A common choice is `dspy.majority`. Defaults to `None`.
-- `size` (_int_, _optional_): Number of programs to randomly select for ensembling. If not specified, all programs will be used. Defaults to `None`.
+- `reduce_fn` (_callable_, _optional_): Function used to reduce multiple outputs from different programs into a single output. A common choice is `dspy.majority`. If set to `None`, all sampled outputs will be returned as a list. Defaults to `None`.
+- `size` (_int_, _optional_): Number of programs to randomly select for ensembling if not all programs are to be used for reduction. If not specified, all programs will be used. Defaults to `None`.
 - `deterministic` (_bool_, _optional_): Specifies whether ensemble should operate deterministically. Currently, setting this to `True` will raise an error as this feature is pending implementation. Defaults to `False`.
@@ BootstrapFewShotWithRandomSearch constructor @@
 ```python
-class BootstrapFewShotWithRandomSearch(BootstrapFewShot):
+class BootstrapFewShotWithRandomSearch(LabeledFewShot):
     def __init__(self, metric, teacher_settings={}, max_bootstrapped_demos=4, max_labeled_demos=16, max_rounds=1, num_candidate_programs=16, num_threads=6):
         self.metric = metric
         self.teacher_settings = teacher_settings
@@ BootstrapFinetune example @@
 ```python
 #Assume defined trainset
-#Assume defined RAG class
+# Assume the RAG class is already defined as shown earlier
 ...
 
 #Define teleprompter
````
- Review the code for each class in the `/dspy/teleprompt/*` directory.
- Update the description of each class in the `teleprompters.md` file to accurately reflect the current functionality of the class. This includes the `LabeledFewShot`, `BootstrapFewShot`, `Ensemble`, `BootstrapFewShotWithRandomSearch`, and `BootstrapFinetune` classes.
- Update the description of the constructor for each class. This includes the purpose of the constructor and the parameters it accepts.
- Update the description of the methods for each class. This includes the purpose of the method, the parameters it accepts, and what it returns.
- Update the examples for each class to ensure they accurately demonstrate how to use the class and its methods.
- Ensure that the documentation is clear, concise, and easy to understand.
Modified file with Assistant API
Run GitHub Actions for docs/teleprompters/teleprompters.md
Ran GitHub Actions for e344da508577d549d00d057789e8d1b41cd649eb:
Plan
This is based on the results of the Planning step. The plan may expand from failed GitHub Actions runs.
Run GitHub Actions for docs/teleprompters/teleprompters.md
Code Snippets Found
This is based on the results of the Searching step.
docs/teleprompters/teleprompters.md:2-282

Teleprompters are powerful optimizers (included in DSPy) that can learn to bootstrap and select effective prompts for the modules of any program. (The "tele-" in the name means "at a distance", i.e., automatic prompting at a distance.)

This documentation provides an overview of the DSPy Teleprompters.

## Teleprompters

| Module | Jump To |
| --- | --- |
| LabeledFewShot | [LabeledFewShot Section](#telepromptlabeledfewshot) |
| BootstrapFewShot | [BootstrapFewShot Section](#telepromptbootstrapfewshot) |
| Ensemble | [Ensemble Section](#telepromptensemble) |
| BootstrapFewShotWithRandomSearch | [BootstrapFewShotWithRandomSearch Section](#telepromptbootstrapfewshotwithrandomsearch) |
| BootstrapFinetune | [BootstrapFinetune Section](#telepromptbootstrapfinetune) |

## teleprompt.LabeledFewShot

### Constructor

The constructor initializes the `LabeledFewShot` class and sets up its attributes, particularly the number of samples `k` to be used by each predictor.

```python
class LabeledFewShot(Teleprompter):
    def __init__(self, k=16):
        self.k = k
```

**Parameters:**
- `k` (_int_): Number of samples to be used for each predictor. Defaults to 16.

### Method

#### `compile(self, student, *, trainset)`

This method compiles the `LabeledFewShot` instance by configuring the `student` predictor. It assigns subsets of the `trainset` to each of the student's predictors' `demos` attribute. If the `trainset` is empty, the method returns the original `student`.

**Parameters:**
- `student` (_Teleprompter_): Student predictor to be compiled.
- `trainset` (_list_): Training dataset for compiling with student predictor.

**Returns:**
- The compiled `student` predictor with assigned training samples for each predictor or the original `student` if the `trainset` is empty.

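The demo-selection logic can be sketched in plain Python. This mirrors `dspy/teleprompt/vanilla.py` (excerpted later on this page); `select_demos` itself is a hypothetical helper name, not part of DSPy:

```python
import random

# Sketch of the demo-selection behavior: with sampling on, a seeded RNG draws
# up to k demos from the trainset; otherwise the first k demos are taken.
def select_demos(trainset, k=16, sample=True):
    n = min(k, len(trainset))  # never request more demos than exist
    rng = random.Random(0)     # seeded RNG, as in the DSPy source
    return rng.sample(trainset, n) if sample else trainset[:n]
```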
### Example

```python
import dspy

#Assume defined trainset
class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()

        #declare retrieval and predictor modules
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    #flow for answering questions using predictor and retrieval modules
    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)

#Define teleprompter
teleprompter = LabeledFewShot()

# Compile!
compiled_rag = teleprompter.compile(student=RAG(), trainset=trainset)
```

## teleprompt.BootstrapFewShot

### Constructor

The constructor initializes the `BootstrapFewShot` class and sets up parameters for bootstrapping.

```python
class BootstrapFewShot(Teleprompter):
    def __init__(self, metric=None, teacher_settings={}, max_bootstrapped_demos=4, max_labeled_demos=16, max_rounds=1):
        self.metric = metric
        self.teacher_settings = teacher_settings

        self.max_bootstrapped_demos = max_bootstrapped_demos
        self.max_labeled_demos = max_labeled_demos
        self.max_rounds = max_rounds
```

**Parameters:**
- `metric` (_callable_, _optional_): Metric function to evaluate examples during bootstrapping. Defaults to `None`.
- `teacher_settings` (_dict_, _optional_): Settings for teacher predictor. Defaults to empty dictionary.
- `max_bootstrapped_demos` (_int_, _optional_): Maximum number of bootstrapped demonstrations per predictor. Defaults to 4.
- `max_labeled_demos` (_int_, _optional_): Maximum number of labeled demonstrations per predictor. Defaults to 16.
- `max_rounds` (_int_, _optional_): Maximum number of bootstrapping rounds. Defaults to 1.

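The `metric` callable is given an example and a prediction (plus, in some DSPy versions, an optional trace) and returns whether the bootstrapped demonstration should be kept. As a hypothetical illustration, an exact-match metric over dict-like objects might look like this; the `answer` field and dict-based access are assumptions, not DSPy API:

```python
# Hypothetical metric: accept a demonstration only when the predicted answer
# exactly matches the labeled answer, ignoring case and surrounding whitespace.
def exact_match_metric(example, prediction, trace=None):
    return example["answer"].strip().lower() == prediction["answer"].strip().lower()
```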
### Method

#### `compile(self, student, *, teacher=None, trainset, valset=None)`

This method compiles the BootstrapFewShot instance by performing bootstrapping to refine the student predictor.

This process includes preparing the student and teacher predictors, which involves creating predictor copies, verifying that the student predictor is uncompiled, and, if the teacher predictor hasn't been compiled, compiling it with labeled demonstrations via LabeledFewShot.

The next stage involves preparing predictor mappings by validating that the student and teacher predictors have the same program structure and the same signatures while being different objects.

The final stage is performing the bootstrapping iterations.

**Parameters:**
- `student` (_Teleprompter_): Student predictor to be compiled.
- `teacher` (_Teleprompter_, _optional_): Teacher predictor used for bootstrapping. Defaults to `None`.
- `trainset` (_list_): Training dataset used in bootstrapping.
- `valset` (_list_, _optional_): Validation dataset used in compilation. Defaults to `None`.

**Returns:**
- The compiled `student` predictor after bootstrapping with refined demonstrations.

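The stages above can be summarized schematically. This is a simplified illustration of the bootstrapping loop, not the actual DSPy implementation; every name in it is hypothetical:

```python
# Schematic sketch: a "teacher" attempts each training example, and only the
# traces that pass the metric become demonstrations for the student's predictors.
def bootstrap(student_predictors, teacher_run, trainset, metric, max_demos=4):
    demos = []
    for example in trainset:
        trace = teacher_run(example)      # teacher attempts the example
        if metric(example, trace):        # keep only traces the metric accepts
            demos.append(trace)
        if len(demos) >= max_demos:       # stop once the demo budget is filled
            break
    for predictor in student_predictors:  # assign the bootstrapped demos
        predictor["demos"] = list(demos)
    return student_predictors
```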
### Example

```python
#Assume defined trainset
#Assume defined RAG class
...

#Define teleprompter and include teacher
teacher = dspy.OpenAI(model='gpt-3.5-turbo', api_key=openai.api_key, api_provider="openai", model_type="chat")
teleprompter = BootstrapFewShot(teacher_settings=dict({'lm': teacher}))

# Compile!
compiled_rag = teleprompter.compile(student=RAG(), trainset=trainset)
```

## teleprompt.Ensemble

### Constructor

The constructor initializes the `Ensemble` class and sets up its attributes. This teleprompter is designed to create ensembled versions of multiple programs, reducing the various outputs from different programs into a single output.

```python
class Ensemble(Teleprompter):
    def __init__(self, *, reduce_fn=None, size=None, deterministic=False):
```

**Parameters:**
- `reduce_fn` (_callable_, _optional_): Function used to reduce multiple outputs from different programs into a single output. A common choice is `dspy.majority`. Defaults to `None`.
- `size` (_int_, _optional_): Number of programs to randomly select for ensembling. If not specified, all programs will be used. Defaults to `None`.
- `deterministic` (_bool_, _optional_): Specifies whether ensemble should operate deterministically. Currently, setting this to `True` will raise an error as this feature is pending implementation. Defaults to `False`.

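If `dspy.majority` does not fit the task, `reduce_fn` can be any callable over the list of outputs. A hypothetical majority-vote reducer might look like this; `majority_vote` and the `answer` attribute are assumptions for illustration, not DSPy API:

```python
from collections import Counter

# Hypothetical reduce_fn: pick the most common answer among the programs'
# outputs. Objects with an `answer` attribute are unwrapped; plain values are
# counted directly so the sketch stays self-contained.
def majority_vote(outputs):
    answers = [getattr(out, "answer", out) for out in outputs]
    return Counter(answers).most_common(1)[0][0]
```

An `Ensemble(reduce_fn=majority_vote)` would then collapse the sampled outputs into that single winning answer.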
### Method

#### `compile(self, programs)`

This method compiles an ensemble of programs into a single program that, when run, can either randomly sample a subset of the given programs to produce outputs or use all of them. The multiple outputs can then be reduced into a single output using the `reduce_fn`.

**Parameters:**
- `programs` (_list_): List of programs to be ensembled.

**Returns:**
- `EnsembledProgram` (_Module_): An ensembled version of the input programs.

### Example

```python
import dspy
from dspy.teleprompt import Ensemble

# Assume a list of programs
programs = [program1, program2, program3, ...]

# Define Ensemble teleprompter
teleprompter = Ensemble(reduce_fn=dspy.majority, size=2)

# Compile to get the EnsembledProgram
ensembled_program = teleprompter.compile(programs)
```

## teleprompt.BootstrapFewShotWithRandomSearch

### Constructor

The constructor initializes the `BootstrapFewShotWithRandomSearch` class and sets up its attributes. It inherits from the `BootstrapFewShot` class and introduces additional attributes for the random search process.

```python
class BootstrapFewShotWithRandomSearch(BootstrapFewShot):
    def __init__(self, metric, teacher_settings={}, max_bootstrapped_demos=4, max_labeled_demos=16, max_rounds=1, num_candidate_programs=16, num_threads=6):
        self.metric = metric
        self.teacher_settings = teacher_settings
        self.max_rounds = max_rounds

        self.num_threads = num_threads

        self.min_num_samples = 1
        self.max_num_samples = max_bootstrapped_demos
        self.num_candidate_sets = num_candidate_programs
        self.max_num_traces = 1 + int(max_bootstrapped_demos / 2.0 * self.num_candidate_sets)

        self.max_bootstrapped_demos = self.max_num_traces
        self.max_labeled_demos = max_labeled_demos

        print("Going to sample between", self.min_num_samples, "and", self.max_num_samples, "traces per predictor.")
        print("Going to sample", self.max_num_traces, "traces in total.")
        print("Will attempt to train", self.num_candidate_sets, "candidate sets.")
```

**Parameters:**
- `metric` (_callable_): Metric function to evaluate examples during bootstrapping. Unlike in `BootstrapFewShot`, this parameter is required and has no default.
- `teacher_settings` (_dict_, _optional_): Settings for teacher predictor. Defaults to empty dictionary.
- `max_bootstrapped_demos` (_int_, _optional_): Maximum number of bootstrapped demonstrations per predictor. Defaults to 4.
- `max_labeled_demos` (_int_, _optional_): Maximum number of labeled demonstrations per predictor. Defaults to 16.
- `max_rounds` (_int_, _optional_): Maximum number of bootstrapping rounds. Defaults to 1.
- `num_candidate_programs` (_int_): Number of candidate programs to generate during random search. Defaults to 16.
- `num_threads` (_int_): Number of threads used for evaluation during random search. Defaults to 6.

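The trace budget printed by the constructor follows directly from the arithmetic above. For example, with the defaults `max_bootstrapped_demos=4` and `num_candidate_programs=16`:

```python
# Reproducing the trace-budget arithmetic from the constructor, with defaults.
max_bootstrapped_demos = 4
num_candidate_programs = 16

min_num_samples = 1
max_num_samples = max_bootstrapped_demos
num_candidate_sets = num_candidate_programs
max_num_traces = 1 + int(max_bootstrapped_demos / 2.0 * num_candidate_sets)

print(max_num_traces)  # 1 + int(4 / 2.0 * 16) = 33
```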
### Method

Refer to [teleprompt.BootstrapFewShot](#telepromptbootstrapfewshot) documentation.

### Example

```python
#Assume defined trainset, RAG class, and metric function
...

#Define teleprompter and include teacher
teacher = dspy.OpenAI(model='gpt-3.5-turbo', api_key=openai.api_key, api_provider="openai", model_type="chat")
teleprompter = BootstrapFewShotWithRandomSearch(metric=metric, teacher_settings=dict({'lm': teacher}))

# Compile!
compiled_rag = teleprompter.compile(student=RAG(), trainset=trainset)
```

## teleprompt.BootstrapFinetune

### Constructor

#### `__init__(self, metric=None, teacher_settings={}, multitask=True)`

The constructor initializes a `BootstrapFinetune` instance and sets up its attributes. It defines the teleprompter as a `BootstrapFewShot` instance for the finetuning compilation.

```python
class BootstrapFinetune(Teleprompter):
    def __init__(self, metric=None, teacher_settings={}, multitask=True):
```

**Parameters:**
- `metric` (_callable_, _optional_): Metric function to evaluate examples during bootstrapping. Defaults to `None`.
- `teacher_settings` (_dict_, _optional_): Settings for teacher predictor. Defaults to empty dictionary.
- `multitask` (_bool_, _optional_): Enable multitask fine-tuning. Defaults to `True`.

### Method

#### `compile(self, student, *, teacher=None, trainset, valset=None, target='t5-large', bsize=12, accumsteps=1, lr=5e-5, epochs=1, bf16=False, int8=False, peft=False, path_prefix=None)`

This method first compiles for bootstrapping with the `BootstrapFewShot` teleprompter. It then prepares fine-tuning data by generating prompt-completion pairs for training and performs the fine-tuning. After compilation, the LMs are set to the fine-tuned models and the method returns a compiled and fine-tuned predictor.

**Parameters:**
- `student` (_Predict_): Student predictor to be fine-tuned.
- `teacher` (_Predict_, _optional_): Teacher predictor to help with fine-tuning. Defaults to `None`.
- `trainset` (_list_): Training dataset for fine-tuning.
- `valset` (_list_, _optional_): Validation dataset for fine-tuning. Defaults to `None`.
- `target` (_str_, _optional_): Target model for fine-tuning. Defaults to `'t5-large'`.
- `bsize` (_int_, _optional_): Batch size for training. Defaults to `12`.
- `accumsteps` (_int_, _optional_): Gradient accumulation steps. Defaults to `1`.
- `lr` (_float_, _optional_): Learning rate for fine-tuning. Defaults to `5e-5`.
- `epochs` (_int_, _optional_): Number of training epochs. Defaults to `1`.
- `bf16` (_bool_, _optional_): Enable mixed-precision training with BF16. Defaults to `False`.
- `int8` (_bool_, _optional_): Enable int8 training. Defaults to `False`.
- `peft` (_bool_, _optional_): Enable parameter-efficient fine-tuning. Defaults to `False`.
- `path_prefix` (_str_, _optional_): Directory prefix under which checkpoints are saved. Defaults to `None`.

**Returns:**
- `compiled2` (_Predict_): A compiled and fine-tuned `Predict` instance.

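The fine-tuning data described above takes the form of `<prompt, completion>` pairs serialized as JSON Lines (one JSON object per line). The records below are invented purely to illustrate the shape of the `.jsonl` file:

```python
import json

# Illustrative records only: the structure of the prompt-completion pairs
# that the compile step writes to its .jsonl training file.
records = [
    {"prompt": "Question: What is 2 + 2?\nAnswer:", "completion": " 4"},
    {"prompt": "Question: What is the capital of France?\nAnswer:", "completion": " Paris"},
]

jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl.splitlines()[0])
```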
### Example

```python
#Assume defined trainset, RAG class, and teacher
...

#Define teleprompter
teleprompter = BootstrapFinetune(teacher_settings=dict({'lm': teacher}))

# Compile!
compiled_rag = teleprompter.compile(student=RAG(), trainset=trainset, target='google/flan-t5-base')
```
dspy/teleprompt/vanilla.py:6-27
```python
class LabeledFewShot(Teleprompter):
    def __init__(self, k=16):
        self.k = k

    def compile(self, student, *, trainset, sample=True):
        self.student = student.reset_copy()
        self.trainset = trainset

        if len(self.trainset) == 0:
            return self.student

        rng = random.Random(0)

        for predictor in self.student.predictors():
            if sample:
                predictor.demos = rng.sample(self.trainset, min(self.k, len(self.trainset)))
            else:
                predictor.demos = self.trainset[:min(self.k, len(self.trainset))]

        return self.student
```
dspy/teleprompt/finetune.py:47-166
```python
class BootstrapFinetune(Teleprompter):
    def __init__(self, metric=None, teacher_settings={}, multitask=True):
        self.metric = metric
        self.teacher_settings = teacher_settings
        self.multitask = multitask

        metric = metric or (lambda *args: True)
        self.teleprompter = BootstrapFewShot(metric=metric,
                                             max_bootstrapped_demos=999999,
                                             max_labeled_demos=0,  # FIXME: TODO: Make this zero? or param, with default as 16 or 0?
                                             teacher_settings=teacher_settings)

    def compile(self, student, *, teacher=None, trainset, valset=None,
                target='t5-large', bsize=12, accumsteps=1, lr=5e-5, epochs=1, bf16=False, int8=False, peft=False, path_prefix=None):

        # It's usually better to supply a few-shot teacher, rather than uncompiled module (the student).
        if teacher is None:
            print("WARNING: Using a vanilla teacher. "
                  "Are you sure you want to use BootstrapFinetune without a compiled teacher?")

        teachers = teacher if isinstance(teacher, list) else [teacher]
        finetune_data = {}

        for teacher in teachers:
            # Dummy compilation to get bootstraps.
            compiled = self.teleprompter.compile(student, teacher=teacher, trainset=trainset)
            multitask = self.multitask

            # Prepare finetune <prompt, completion> pairs.
            for name, predictor in compiled.named_predictors():
                name_ = 'all' if multitask else name
                finetune_data[name_] = [] if name_ not in finetune_data else finetune_data[name_]

                for demo in predictor.demos:
                    demo = dict(demo)

                    # TODO: FIXME: generalize.
                    completion = demo.pop(predictor.signature.fields[-1].output_variable)
                    prompt = predictor.signature.query(dsp.Example(demos=[], **demo)).strip()

                    finetune_data[name_].append(dict(prompt=prompt, completion=completion))

        for name_ in finetune_data:
            random.Random(0).shuffle(finetune_data[name_])
            print(name_, len(finetune_data[name_]))

        #
        # Dump as files.
        #
        finetune_paths = {}

        for name in finetune_data:
            data = finetune_data[name]
            hashed_name = name + '.' + Hasher.hash(data)
            output_path = os.path.join(training_data_directory, f'{hashed_name}.jsonl')
            print(output_path)

            with open(output_path, 'w') as f:
                for line in data:
                    f.write(ujson.dumps(line) + '\n')

            finetune_paths[name] = output_path

        #
        # Train!
        #
        import string
        compiler_config = {
            'save': ''.join(random.Random(time.time()).choices(string.ascii_uppercase + string.digits, k=13)),  # https://stackoverflow.com/a/2257449/1493011
            'peft': peft,
            'fp16': False,
            'bf16': bf16,
            'int8': int8,
            'fid': False,
            'rationale': False,
            'batch_size': bsize,
            'epochs': epochs,
            'gradient_accumulation_steps': accumsteps,  # 2,
            'lr': lr
        }

        compiler_config['save'] = os.path.join(path_prefix, compiler_config['save']) if path_prefix else compiler_config['save']

        from dsp.modules.finetuning import finetune_hf

        target = target
        finetune_models = {}

        for name in finetune_data:
            training_data_path = finetune_paths[name]
            compiler_config_ = dict(compiler_config)
            compiler_config_['save'] = compiler_config['save'] + '.' + name
            best_ckpt_path = finetune_hf(training_data_path, target, compiler_config_)

            print(f"#> Best checkpoint path: {best_ckpt_path} for {name}")
            finetune_models[name] = dsp.HFModel(model=target, checkpoint=best_ckpt_path)  # best_ckpt_path

        #
        # Set the LMs to the finetuned ones, per module
        #
        compiled2 = compiled.reset_copy()

        assert len(compiled.named_predictors()) == len(compiled2.named_predictors())

        for (name, predictor), (name2, predictor2) in zip(compiled.named_predictors(), compiled2.named_predictors()):
            assert name == name2
            name = 'all' if multitask else name

            # TODO: FIXME: When we assign .lm, the Predict.forward will also set only_query=True.
            # This is correct for here but we may want to make it more explicitly restricted to finetuned models.
            print(f"Assigning the LM of predictor {name}.")

            predictor2.lm = finetune_models[name]
            assert predictor2.demos == []
```
dspy/teleprompt/ensemble.py:10-39
```python
class Ensemble(Teleprompter):
    def __init__(self, *, reduce_fn=None, size=None, deterministic=False):
        """A common reduce_fn is dspy.majority."""

        assert deterministic is False, "TODO: Implement example hashing for deterministic ensemble."

        self.reduce_fn = reduce_fn
        self.size = size
        self.deterministic = deterministic

    def compile(self, programs):
        size = self.size
        reduce_fn = self.reduce_fn

        import dspy
        class EnsembledProgram(dspy.Module):
            def __init__(self):
                super().__init__()
                self.programs = programs

            def forward(self, *args, **kwargs):
                programs = random.sample(self.programs, size) if size else self.programs
                outputs = [prog(*args, **kwargs) for prog in programs]

                if reduce_fn:
                    return reduce_fn(outputs)

                return outputs
```