Revisions of wasmedge

buildservice-autocommit accepted request 1187712 from Dirk Mueller (dirkmueller) (revision 7)
baserev update by copy to link target
Dirk Mueller (dirkmueller) accepted request 1187697 from Jan Engelhardt (jengelh) (revision 6)
- Add fmt11.patch to resolve FTBFS
buildservice-autocommit accepted request 1132089 from Alexandre Vicenzi (avicenzi) (revision 5)
baserev update by copy to link target
Alexandre Vicenzi (avicenzi) accepted request 1128890 from Dirk Mueller (dirkmueller) (revision 4)
- update to 0.13.5:
  * [Component] share loading entry for component and module
    (#2945)
  * Initial support for the component model proposal.
  * This change allows WasmEdge to recognize the component and
    module formats.
  * Provide options for enabling OpenBLAS, Metal, and cuBLAS.
  * Bump llama.cpp to b1383
  * Build thirdparty/ggml only when the ggml backend is enabled.
  * Enable the ggml plugin on the macOS platform.
  * Introduce `AUTO` detection: Wasm applications no longer need
    to specify the hardware spec (e.g., CPU or GPU); the runtime
    auto-detects it.
  * Unify the preload options with case-insensitive matching.
  * Introduce `metadata` for setting the ggml options; see the
    sketch after this list. The following options are supported:
    * `enable-log`: `true` to enable logging. (default: `false`)
    * `stream-stdout`: `true` to print the inferred tokens in
      streaming mode to standard output. (default: `false`)
    * `ctx-size`: Set the context size, the same as the
      `--ctx-size` parameter in llama.cpp. (default: `512`)
    * `n-predict`: Set the number of tokens to predict, the same
      as the `--n-predict` parameter in llama.cpp. (default:
      `512`)
    * `n-gpu-layers`: Set the number of layers to store in VRAM,
      the same as the `--n-gpu-layers` parameter in llama.cpp.
      (default: `0`)
    * `reverse-prompt`: Set the token pattern at which to halt
      generation, similar to the `--reverse-prompt` parameter in
      llama.cpp. (default: `""`)
    * `batch-size`: Set the batch size for prompt processing.
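  As a rough illustration of how the `AUTO` target and the
  `metadata` options fit together, here is a minimal Rust sketch
  against the `wasmedge-wasi-nn` crate. The crate API, the model
  alias `default`, and the buffer size are assumptions drawn from
  the WASI-NN ggml examples, not part of this changelog.

```rust
// Minimal sketch, not from the changelog: assumes the
// wasmedge-wasi-nn Rust crate, a wasm32-wasi build, and a model
// preloaded under the alias "default" via the runtime.
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // The ggml options listed above travel as a JSON metadata string.
    let metadata = r#"{"enable-log": false, "ctx-size": 512, "n-predict": 512, "n-gpu-layers": 0}"#;

    // ExecutionTarget::AUTO leaves the CPU/GPU choice to the runtime,
    // matching the `AUTO` detection introduced in this release.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .config(metadata.to_string())
        .build_from_cache("default")
        .expect("failed to load the preloaded model");
    let mut ctx = graph.init_execution_context().expect("no execution context");

    // The prompt goes in as a UTF-8 byte tensor at input index 0.
    let prompt = "Once upon a time";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("set_input failed");
    ctx.compute().expect("compute failed");

    // get_output reports how many bytes were written into the buffer.
    let mut out = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut out).expect("get_output failed");
    println!("{}", String::from_utf8_lossy(&out[..n]));
}
```

  Such a guest would typically be run with a preload along the
  lines of `wasmedge --nn-preload default:GGML:AUTO:model.gguf
  app.wasm` (the exact flag form is an assumption based on the
  upstream WASI-NN documentation).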
Ana Guerrero (anag+factory) accepted request 1105474 from Alexandre Vicenzi (avicenzi) (revision 3)
initialized devel package after accepting 1105474
Avindra Goolcharan (avindra) accepted request 1105212 from Alexandre Vicenzi (avicenzi) (revision 1)
Add WasmEdge