LLMClientAskAsync Method

Makes an asynchronous streaming LLM call over a WebSocket. Invokes onChunk for each text fragment as it arrives and returns the complete concatenated response when the stream ends. By default, uses the server's currently active backend and model.

Definition

Namespace: DWSIM.SharedClasses
Assembly: DWSIM.SharedClasses (in DWSIM.SharedClasses.dll) Version: 10.0.0.0
public Task<string> AskAsync(
	string prompt,
	string model = "",
	string lang = "",
	Action<string> onChunk = null,
	CancellationToken ct = default
)

Parameters

prompt  String
The prompt text to send to the LLM.

model  String  (Optional)
The model to use. When empty, the server's currently active model is used.

lang  String  (Optional)
Optional language hint for the response.

onChunk  Action&lt;String&gt;  (Optional)
Callback invoked for each received text fragment.

ct  CancellationToken  (Optional)
Token used to cancel the streaming call.

Return Value

Task&lt;String&gt;
A task that completes with the full concatenated response text.
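
Example

A minimal usage sketch. Only the AskAsync signature comes from this page; the client construction (a parameterless LLMClient constructor) and the server setup are assumptions made for illustration.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using DWSIM.SharedClasses;

class Example
{
    static async Task RunAsync()
    {
        // Hypothetical construction; consult the LLMClient type
        // documentation for the actual constructor parameters.
        var client = new LLMClient();

        // Cancel the stream if it runs longer than 60 seconds.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));

        // onChunk prints each fragment as it arrives; the awaited task
        // yields the full concatenated response once the stream ends.
        string full = await client.AskAsync(
            "Summarize the Peng-Robinson equation of state.",
            model: "",   // empty = server's currently active model
            lang: "",    // empty = server default
            onChunk: chunk => Console.Write(chunk),
            ct: cts.Token);

        Console.WriteLine();
        Console.WriteLine($"Received {full.Length} characters in total.");
    }
}
```

Because the call streams, onChunk lets a UI display partial output immediately, while the returned Task&lt;String&gt; still delivers the complete text for callers that only need the final result.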

See Also