openSUSE:Factory
File wf-recorder-0.5.0+git1.obscpio of Package wf-recorder
File: wf-recorder-0.5.0+git1/.github/workflows/build.yaml

name: Build
on: [push, pull_request]

jobs:
  linux:
    runs-on: ubuntu-latest
    container: registry.fedoraproject.org/fedora:latest
    steps:
      - name: Set up DNF download cache
        id: dnf-cache
        uses: actions/cache@v3
        with:
          path: /var/cache/dnf
          key: ${{ runner.os }}-dnfcache
      - name: Install pre-requisites
        run: dnf --assumeyes --setopt=install_weak_deps=False install gcc-c++ meson /usr/bin/git /usr/bin/wayland-scanner 'pkgconfig(wayland-client)' 'pkgconfig(wayland-protocols)' 'pkgconfig(libpulse-simple)' 'pkgconfig(libavutil)' 'pkgconfig(libavcodec)' 'pkgconfig(libavformat)' 'pkgconfig(libavdevice)' 'pkgconfig(libavfilter)' 'pkgconfig(libswresample)' 'pkgconfig(gbm)' 'pkgconfig(libdrm)' 'pkgconfig(libpipewire-0.3)'
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0 # Shallow clones speed things up
      - run: git config --global --add safe.directory '*' # Needed for git rev-parse
      - name: meson configure
        run: meson ./Build
      - name: compile with ninja
        run: ninja -C ./Build

File: wf-recorder-0.5.0+git1/LICENSE

The MIT License (MIT)

Copyright (c) 2019 Ilia Bozhinov

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

File: wf-recorder-0.5.0+git1/README.md

# wf-recorder

wf-recorder is a utility program for screen recording of `wlroots`-based compositors (more specifically, those that support `wlr-screencopy-v1` and `xdg-output`). Its dependencies are `ffmpeg`, `wayland-client` and `wayland-protocols`.

# Installation

[comment]: <> (List ordered alphabetically)

## Alpine Linux

wf-recorder is available in the community repositories:

```
apk add wf-recorder
```

## Arch Linux

Arch users can install wf-recorder from the Community repo.

```
pacman -S wf-recorder
```

## Artix Linux

Artix users can install wf-recorder from the official repos.

```
pacman -S wf-recorder
```

## Debian GNU/Linux

Debian users can install wf-recorder from the official repos.

```
apt install wf-recorder
```

## Fedora Linux

Fedora users can install wf-recorder from the official repos.

```
sudo dnf install wf-recorder
```

## Gentoo Linux

Gentoo users can install wf-recorder from the official (`::gentoo`) repository.

## NixOS / Nix

Users of the Nix package manager can add the `wf-recorder` package to their system configurations, or use `nix-shell` / `nix shell` / `nix run`:

```
nix-shell -p wf-recorder
# OR
nix shell nixpkgs#wf-recorder
# OR
nix run nixpkgs#wf-recorder
```

## Void Linux

Void users can install wf-recorder from the official repos.

```
xbps-install -S wf-recorder
```

## From Source

### Install Dependencies

#### Ubuntu

```
sudo apt install g++ meson libavutil-dev libavcodec-dev libavformat-dev libswscale-dev libpulse-dev
```

#### Fedora

```
sudo dnf install gcc-c++ meson wayland-devel wayland-protocols-devel ffmpeg-free-devel pulseaudio-libs-devel
```

### Download & Build

```
git clone https://github.com/ammen99/wf-recorder.git && cd wf-recorder
meson build --prefix=/usr --buildtype=release
ninja -C build
```

Optionally configure with `-Ddefault_codec='codec'`. The default is libx264.

Now you can run `./build/wf-recorder` or install it with `sudo ninja -C build install`. The man page can be read with `man ./manpage/wf-recorder.1`.

# Usage

In its simplest form, run `wf-recorder` to start recording and use Ctrl+C to stop. This will create a file called `recording.mp4` in the current working directory using the default codec.

Use `-f <filename>` to specify the output file. In case of multiple outputs, you'll first be prompted to select the output you want to record. If you know the output name beforehand, you can use the `-o <output name>` option.

To select a specific part of the screen you can either use `-g <geometry>`, or use [slurp](https://github.com/emersion/slurp) for interactive selection of the screen area that will be recorded:

```
wf-recorder -g "$(slurp)"
```

You can record screen and sound simultaneously with

```
wf-recorder --audio --file=recording_with_audio.mp4
```

To specify an audio device, use the `-a<device>` or `--audio=<device>` options. To specify a video codec, use the `-c <codec>` option. To modify codec parameters, use `-p <option_name>=<option_value>`.

You can also specify an audio codec using `-C <codec>`. Alternatively, the long form `--audio-codec` can be used.

You can use the following command to check all available video codecs:

```
ffmpeg -hide_banner -encoders | grep -E '^ V' | grep -F '(codec' | cut -c 8- | sort
```

and the following for audio codecs:

```
ffmpeg -hide_banner -encoders | grep -E '^ A' | grep -F '(codec' | cut -c 8- | sort
```

Use ffmpeg to get details about a specific encoder, filter or muxer.

To set a specific output format, use the `--muxer` option. For example, to output to a video4linux2 loopback you might use:

```
wf-recorder --muxer=v4l2 --codec=rawvideo --file=/dev/video2
```

To use GPU encoding, use a VAAPI codec (e.g. `h264_vaapi`) and specify a GPU device to use with the `-d` option:

```
wf-recorder -f test-vaapi.mkv -c h264_vaapi -d /dev/dri/renderD128
```

Some drivers report support for rgb0 data for vaapi input but really only support yuv planar formats. In this case, use the `-x yuv420p` or `--pixel-format yuv420p` option in addition to the vaapi options to convert the data to yuv planar data before sending it to the GPU.
File: wf-recorder-0.5.0+git1/config.h.in

#pragma once

#define DEFAULT_CODEC "@default_codec@"
#define DEFAULT_PIX_FMT "@default_pix_fmt@"
#define DEFAULT_AUDIO_BACKEND "@default_audio_backend@"
#define DEFAULT_AUDIO_CODEC "@default_audio_codec@"
#define DEFAULT_AUDIO_SAMPLE_RATE @default_audio_sample_rate@
#define DEFAULT_CONTAINER_FORMAT "@default_container_format@"
#define FALLBACK_AUDIO_SAMPLE_FMT "@fallback_audio_sample_fmt@"

#mesondefine HAVE_AUDIO
#mesondefine HAVE_PULSE
#mesondefine HAVE_PIPEWIRE
#mesondefine HAVE_OPENCL
#mesondefine HAVE_LIBAVDEVICE

File: wf-recorder-0.5.0+git1/manpage/wf-recorder.1

.Dd $Mdocdate: July 30 2022 $
.Dt WF-RECORDER 1
.Os
.Sh NAME
.Nm wf-recorder
.Nd simple screen recording program for wlroots-based compositors
.Sh SYNOPSIS
.Nm wf-recorder
.Op Fl abcCdDefFghlmopPrRvxX
.Op Fl a , -audio Op Ar =DEVICE
.Op Fl b , -bframes Ar max_b_frames
.Op Fl B , -buffrate Ar buffrate
.Op Fl c , -codec Ar output_codec
.Op Fl r , -framerate Ar framerate
.Op Fl d , -device Ar encoding_device
.Op Fl -no-dmabuf
.Op Fl D , -no-damage
.Op Fl f Ar filename.ext
.Op Fl F Ar filter_string
.Op Fl g , -geometry Ar geometry
.Op Fl h , -help
.Op Fl l , -log
.Op Fl m , -muxer Ar muxer
.Op Fl o , -output Ar output
.Op Fl p , -codec-param Op Ar option_param=option_value
.Op Fl v , -version
.Op Fl x , -pixel-format
.Op Fl -audio-backend Ar audio_backend
.Op Fl C , -audio-codec Ar output_audio_codec
.Op Fl P , -audio-codec-param Op Ar option_param=option_value
.Op Fl R , -sample-rate Ar sample_rate
.Op Fl X , -sample-format Ar sample_format
.Op Fl y , -overwrite
.Sh DESCRIPTION
.Nm
is a tool built to record your screen on Wayland compositors.
It makes use of
.Sy wlr-screencopy
for capturing video and
.Xr ffmpeg 1
for encoding it.
.Pp
In its simplest form, run
.Nm
to start recording and use
.Ql Ctrl+C
to stop.
This will create a file called
.Ql recording.mp4
in the current working directory using the default
.Ar codec .
.Pp
The options are as follows:
.Pp
.Bl -tag -width Ds -compact
.It Fl a , -audio Op Ar =DEVICE
Starts recording the screen with audio.
.Pp
The
.Ar DEVICE
argument is optional.
In case you want to specify the PulseAudio device which will capture the audio,
you can run this command with the name of that device.
You can find your device by running
.D1 $ pactl list sources | grep Name
.Pp
.It Fl b , -bframes Ar max_b_frames
Sets the maximum number of B-frames to use.
.It Fl B , -buffrate Ar buffrate
Tells the encoder a prediction of what framerate to expect.
This preserves VFR and solves the FPS limit issue of some encoders (like svt-av1).
Should be set to the same framerate as the display.
.Pp
.It Fl c , -codec Ar output_codec
Specifies the codec of the video.
Supports GIF output as well.
.Pp
To modify codec parameters, use
.Fl p Ar option_name=option_value
.Pp
.It Fl r , -framerate Ar framerate
Sets a hard constant framerate.
Will duplicate frames to reach it.
This makes the resulting video CFR.
Solves the FPS limit issue of some encoders.
.Pp
.It Fl d , -device Ar encoding_device
Selects the device to use when encoding the video.
.Pp
Some drivers report support for
.Ql rgb0
data for vaapi input but really only support yuv.
Use the
.Fl x Ar yuv420
option in addition to the vaapi options to convert the data in software,
before sending it to the GPU.
.Pp
.It Fl -no-dmabuf
By default, wf-recorder will try to use only GPU buffers and copies if using a GPU encoder.
However, this can cause issues on some systems.
In such cases, this option will disable the GPU copy and force a CPU one.
.Pp
.It Fl D , -no-damage
By default, wf-recorder will request a new frame from the compositor
only when the screen updates.
This results in a much smaller output file, which however has a variable refresh rate.
When this option is on, wf-recorder does not use this optimization and
continuously records new frames, even if there are no updates on the screen.
.Pp
.It Fl f Ar filename.ext
By using the
.Fl f
option, the output file will have the name
.Ar filename.ext
and the file format will be determined by the provided extension.
If the extension is not recognized by your
.Xr ffmpeg 1
muxers, the command will fail.
.Pp
You can check the muxers that your
.Xr ffmpeg 1
installation supports by running
.Dl $ ffmpeg -muxers
.Pp
.It Fl F , -filter Ar filter_string
Set the ffmpeg filter to use.
VAAPI requires
.Ql scale_vaapi=format=nv12:out_range=full
to work.
.Pp
.It Fl g , -geometry Ar screen_geometry
Selects a specific part of the screen.
The format is "x,y WxH".
.Pp
.It Fl h , -help
Prints the help screen.
.Pp
.It Fl l , -log
Generates a log on the current terminal.
For debug purposes.
.Pp
.It Fl m , -muxer Ar muxer
Set the output format to a specific muxer instead of detecting it from the filename.
.Pp
.It Fl o , -output
Specify the output where the video is to be recorded.
.Pp
.It Fl p , -codec-param Op Ar option_name=option_value
Change the codec parameters.
.Pp
.It Fl v , -version
Print the version of wf-recorder.
.Pp
.It Fl x , -pixel-format Ar pixel_format
Set the output pixel format.
.Pp
List available formats using
.Dl $ ffmpeg -pix_fmts
.Pp
.It Fl -audio-backend Ar audio_backend
Specifies the audio backend to be used when
.Fl a
is set.
.Pp
.It Fl C , -audio-codec Ar output_audio_codec
Specifies the codec of the audio.
.Pp
.It Fl P , -audio-codec-param Op Ar option_name=option_value
Change the audio codec parameters.
.Pp
.It Fl R , -sample-rate Ar sample_rate
Changes the audio sample rate, in Hz.
The default value is 48000.
.Pp
.It Fl X , -sample-format Ar sample_format
Set the output audio sample format.
.Pp
List available formats using
.Dl $ ffmpeg -sample_fmts
.Pp
.It Fl y , -overwrite
Force overwriting the output file without prompting.
.El
.Sh EXAMPLES
To select a specific part of the screen you can either use
.Fl g Ar geometry
or use https://github.com/emersion/slurp for interactive selection of the
screen area that will be recorded:
.Dl $ wf-recorder -g "$(slurp)"
.Pp
You can record screen and sound simultaneously with
.Dl $ wf-recorder --audio --file=recording_with_audio.mp4
.Pp
To specify an audio device, use the
.Fl a Ar DEVICE
or
.Fl -audio Ar =DEVICE
options.
.Pp
To specify a
.Ar codec
use the
.Fl c Ar codec
option.
To modify codec parameters, use
.Fl p Ar option_name=option_value .
.Pp
To set a specific output format, use the
.Fl m , -muxer
option.
For example, to output to a
.Sy video4linux2
loopback you might use:
.Dl $ wf-recorder --muxer=v4l2 --codec=rawvideo --file=/dev/video2
.Pp
To use GPU encoding, use a VAAPI codec (for example
.Ql h264_vaapi )
and specify a GPU device to use with the
.Fl d
option:
.Dl $ wf-recorder -f test-vaapi.mkv -c h264_vaapi -d /dev/dri/renderD128
.Pp
Some drivers report support for
.Ql rgb0
data for
.Ql vaapi
input but really only support yuv planar formats.
In this case, use the
.Fl x Ar yuv420p
option in addition to the
.Ql vaapi
options to convert the data to yuv planar data before sending it to the GPU.
.Sh SEE ALSO
.Xr ffmpeg 1 ,
.Xr pactl 1

File: wf-recorder-0.5.0+git1/meson.build

project(
  'wf-recorder',
  'c',
  'cpp',
  version: '0.5.0',
  license: 'MIT',
  meson_version: '>=0.54.0',
  default_options: [
    'cpp_std=c++17',
    'c_std=c11',
    'warning_level=2',
    'werror=false',
  ],
)

conf_data = configuration_data()

conf_data.set('default_codec', get_option('default_codec'))
conf_data.set('default_pix_fmt', get_option('default_pixel_format'))
conf_data.set('default_audio_codec', get_option('default_audio_codec'))
conf_data.set('default_audio_sample_rate', get_option('default_audio_sample_rate'))
conf_data.set('default_container_format', get_option('default_container_format'))
conf_data.set('fallback_audio_sample_fmt', get_option('fallback_audio_sample_fmt'))

version = '"@0@"'.format(meson.project_version())
git = find_program('git', native: true, required: false)
if git.found()
  git_commit = run_command([git, 'rev-parse', '--short', 'HEAD'], check: false)
  git_branch = run_command([git, 'rev-parse', '--abbrev-ref', 'HEAD'], check: false)
  if git_commit.returncode() == 0 and git_branch.returncode() == 0
    version = '"@0@-@1@ (" __DATE__ ", branch \'@2@\')"'.format(
      meson.project_version(),
      git_commit.stdout().strip(),
      git_branch.stdout().strip(),
    )
  endif
endif
add_project_arguments('-DWFRECORDER_VERSION=@0@'.format(version), language: 'cpp')

include_directories(['.'])
add_project_arguments(['-Wno-deprecated-declarations'], language: 'cpp')

project_sources = ['src/frame-writer.cpp', 'src/main.cpp', 'src/averr.c']

wayland_client = dependency('wayland-client', version: '>=1.20')
wayland_protos = dependency('wayland-protocols', version: '>=1.14')

audio_backends = {
  'pulse': {
    'dependency': dependency('libpulse-simple', required: false),
    'sources': ['src/pulse.cpp'],
    'define': 'HAVE_PULSE'
  },
  'pipewire': {
    'dependency': dependency('libpipewire-0.3', version: '>=1.0.5', required: false),
    'sources': ['src/pipewire.cpp'],
    'define': 'HAVE_PIPEWIRE'
  }
}

default_audio_backend = get_option('default_audio_backend')
message('Using default audio backend: @0@'.format(default_audio_backend))

have_audio = false
audio_deps = []
foreach backend_name, backend_data : audio_backends
  if default_audio_backend == backend_name and not backend_data['dependency'].found()
    error('Default audio backend set to @0@, but @1@ dependency was not found!'.format(backend_name, backend_data['dependency'].name()))
  endif
  if default_audio_backend == backend_name and get_option(backend_name).disabled()
    error('Default audio backend set to @0@, but @1@ support is disabled!'.format(backend_name, backend_name))
  endif
  if get_option(backend_name).enabled() and not backend_data['dependency'].found()
    error('@0@ support is enabled, but @1@ dependency was not found!'.format(backend_name, backend_data['dependency'].name()))
  endif

  if backend_data['dependency'].found() and not get_option(backend_name).disabled()
    conf_data.set(backend_data['define'], true)
    project_sources += backend_data['sources']
    audio_deps += backend_data['dependency']
    have_audio = true
  else
    conf_data.set(backend_data['define'], false)
  endif
endforeach

if have_audio
  conf_data.set('HAVE_AUDIO', true)
  project_sources += 'src/audio.cpp'
  if default_audio_backend == 'auto'
    if conf_data.get('HAVE_PULSE')
      default_audio_backend = 'pulse'
    else
      foreach backend_name, backend_data : audio_backends
        if conf_data.get(backend_data['define'])
          default_audio_backend = backend_name
          break
        endif
      endforeach
    endif
  endif
endif
conf_data.set('default_audio_backend', default_audio_backend)

libavutil = dependency('libavutil')
libavcodec = dependency('libavcodec')
libavformat = dependency('libavformat')
libavdevice = dependency('libavdevice', required: false)
libavfilter = dependency('libavfilter')
swr = dependency('libswresample')
threads = dependency('threads')
gbm = dependency('gbm')
drm = dependency('libdrm')

conf_data.set('HAVE_LIBAVDEVICE', libavdevice.found())

configure_file(input: 'config.h.in', output: 'config.h', configuration: conf_data)

install_data('manpage/wf-recorder.1',
  install_dir: join_paths(get_option('prefix'), get_option('mandir'), 'man1'))

subdir('proto')

dependencies = [
  wayland_client, wayland_protos,
  libavutil, libavcodec, libavformat, libavdevice, libavfilter,
  wf_protos, threads, swr, gbm, drm
] + audio_deps

executable('wf-recorder', project_sources, dependencies: dependencies, install: true)

summary = [
  '',
  '----------------',
  'wf-recorder @0@'.format(meson.project_version()),
  '----------------',
  'Default audio backend: @0@'.format(default_audio_backend),
]
foreach backend_name, backend_data : audio_backends
  summary += [' - @0@: @1@'.format(backend_name, conf_data.get(backend_data['define']))]
endforeach
message('\n'.join(summary))

File: wf-recorder-0.5.0+git1/meson_options.txt

option('default_codec', type: 'string', value: 'libx264', description: 'Codec that will be used by default')
option('default_pixel_format', type: 'string', value: '', description: 'Pixel format that will be used by default')
option('default_audio_codec', type: 'string', value: 'aac', description: 'Audio codec that will be used by default')
option('default_audio_sample_rate', type: 'integer', value: 48000, description: 'Audio sample rate that will be used by default')
option('default_container_format', type: 'string', value: 'mkv', description: 'Container file format that will be used by default')
option('fallback_audio_sample_fmt', type: 'string', value: 's16', description: 'Fallback audio sample format that will be used if wf-recorder cannot determine the sample formats supported by a codec')
option('pulse', type: 'feature', value: 'auto', description: 'Enable Pulseaudio')
option('pipewire', type: 'feature', value: 'auto', description: 'Enable PipeWire')
option('default_audio_backend', type: 'combo', choices: ['auto', 'pulse', 'pipewire'], value: 'auto', description: 'Default audio backend')

File: wf-recorder-0.5.0+git1/proto/meson.build

wl_protocol_dir = wayland_protos.get_variable(pkgconfig: 'pkgdatadir', internal: 'pkgdatadir')

wayland_scanner = find_program('wayland-scanner')

wayland_scanner_code = generator(
  wayland_scanner,
  output: '@BASENAME@-protocol.c',
  arguments: ['private-code', '@INPUT@', '@OUTPUT@'],
)

wayland_scanner_client = generator(
  wayland_scanner,
  output: '@BASENAME@-client-protocol.h',
  arguments: ['client-header', '@INPUT@', '@OUTPUT@'],
)

client_protocols = [
  [wl_protocol_dir, 'unstable/xdg-output/xdg-output-unstable-v1.xml'],
  [wl_protocol_dir, 'unstable/linux-dmabuf/linux-dmabuf-unstable-v1.xml'],
  'wlr-screencopy-unstable-v1.xml',
]

wl_protos_client_src = []
wl_protos_headers = []

foreach p : client_protocols
  xml = join_paths(p)
  wl_protos_client_src += wayland_scanner_code.process(xml)
  wl_protos_headers += wayland_scanner_client.process(xml)
endforeach

lib_wl_protos = static_library('wl_protos', wl_protos_client_src + wl_protos_headers,
  dependencies: [wayland_client]) # for the include directory

wf_protos = declare_dependency(
  link_with: lib_wl_protos,
  sources: wl_protos_headers,
)

File: wf-recorder-0.5.0+git1/proto/wlr-screencopy-unstable-v1.xml

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="wlr_screencopy_unstable_v1">
  <copyright>
    Copyright © 2018 Simon Ser
    Copyright © 2019 Andri Yngvason

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the
    "Software"), to deal in the Software without restriction, including
    without limitation the rights to use, copy, modify, merge, publish,
    distribute, sublicense, and/or sell copies of the Software, and to
    permit persons to whom the Software is furnished to do so, subject to
    the following conditions:

    The above copyright notice and this permission notice (including the
    next paragraph) shall be included in all copies or substantial
    portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
    EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
    MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
    IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
    CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
    TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
    SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  </copyright>

  <description summary="screen content capturing on client buffers">
    This protocol allows clients to ask the compositor to copy part of the
    screen content to a client buffer.

    Warning! The protocol described in this file is experimental and
    backward incompatible changes may be made. Backward compatible changes
    may be added together with the corresponding interface version bump.
    Backward incompatible changes are done by bumping the version number in
    the protocol and interface names and resetting the interface version.
    Once the protocol is to be declared stable, the 'z' prefix and the
    version number in the protocol and interface names are removed and the
    interface version number is reset.
  </description>

  <interface name="zwlr_screencopy_manager_v1" version="3">
    <description summary="manager to inform clients and begin capturing">
      This object is a manager which offers requests to start capturing from a
      source.
    </description>

    <request name="capture_output">
      <description summary="capture an output">
        Capture the next frame of an entire output.
      </description>
      <arg name="frame" type="new_id" interface="zwlr_screencopy_frame_v1"/>
      <arg name="overlay_cursor" type="int" summary="composite cursor onto the frame"/>
      <arg name="output" type="object" interface="wl_output"/>
    </request>

    <request name="capture_output_region">
      <description summary="capture an output's region">
        Capture the next frame of an output's region.

        The region is given in output logical coordinates, see
        xdg_output.logical_size. The region will be clipped to the output's
        extents.
      </description>
      <arg name="frame" type="new_id" interface="zwlr_screencopy_frame_v1"/>
      <arg name="overlay_cursor" type="int" summary="composite cursor onto the frame"/>
      <arg name="output" type="object" interface="wl_output"/>
      <arg name="x" type="int"/>
      <arg name="y" type="int"/>
      <arg name="width" type="int"/>
      <arg name="height" type="int"/>
    </request>

    <request name="destroy" type="destructor">
      <description summary="destroy the manager">
        All objects created by the manager will still remain valid, until their
        appropriate destroy request has been called.
      </description>
    </request>
  </interface>

  <interface name="zwlr_screencopy_frame_v1" version="3">
    <description summary="a frame ready for copy">
      This object represents a single frame.

      When created, a series of buffer events will be sent, each representing a
      supported buffer type. The "buffer_done" event is sent afterwards to
      indicate that all supported buffer types have been enumerated. The client
      will then be able to send a "copy" request. If the capture is successful,
      the compositor will send a "flags" followed by a "ready" event.

      For objects version 2 or lower, wl_shm buffers are always supported, i.e.
      the "buffer" event is guaranteed to be sent.

      If the capture failed, the "failed" event is sent. This can happen
      anytime before the "ready" event.

      Once either a "ready" or a "failed" event is received, the client should
      destroy the frame.
    </description>

    <event name="buffer">
      <description summary="wl_shm buffer information">
        Provides information about wl_shm buffer parameters that need to be
        used for this frame. This event is sent once after the frame is
        created if wl_shm buffers are supported.
      </description>
      <arg name="format" type="uint" enum="wl_shm.format" summary="buffer format"/>
      <arg name="width" type="uint" summary="buffer width"/>
      <arg name="height" type="uint" summary="buffer height"/>
      <arg name="stride" type="uint" summary="buffer stride"/>
    </event>

    <request name="copy">
      <description summary="copy the frame">
        Copy the frame to the supplied buffer. The buffer must have the
        correct size, see zwlr_screencopy_frame_v1.buffer and
        zwlr_screencopy_frame_v1.linux_dmabuf. The buffer needs to have a
        supported format.

        If the frame is successfully copied, a "flags" and a "ready" events
        are sent. Otherwise, a "failed" event is sent.
      </description>
      <arg name="buffer" type="object" interface="wl_buffer"/>
    </request>

    <enum name="error">
      <entry name="already_used" value="0" summary="the object has already been used to copy a wl_buffer"/>
      <entry name="invalid_buffer" value="1" summary="buffer attributes are invalid"/>
    </enum>

    <enum name="flags" bitfield="true">
      <entry name="y_invert" value="1" summary="contents are y-inverted"/>
    </enum>

    <event name="flags">
      <description summary="frame flags">
        Provides flags about the frame. This event is sent once before the
        "ready" event.
      </description>
      <arg name="flags" type="uint" enum="flags" summary="frame flags"/>
    </event>

    <event name="ready">
      <description summary="indicates frame is available for reading">
        Called as soon as the frame is copied, indicating it is available for
        reading. This event includes the time at which presentation happened.

        The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples,
        each component being an unsigned 32-bit value. Whole seconds are in
        tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo,
        and the additional fractional part in tv_nsec as nanoseconds. Hence,
        for valid timestamps tv_nsec must be in [0, 999999999]. The seconds
        part may have an arbitrary offset at start.

        After receiving this event, the client should destroy the object.
      </description>
      <arg name="tv_sec_hi" type="uint" summary="high 32 bits of the seconds part of the timestamp"/>
      <arg name="tv_sec_lo" type="uint" summary="low 32 bits of the seconds part of the timestamp"/>
      <arg name="tv_nsec" type="uint" summary="nanoseconds part of the timestamp"/>
    </event>

    <event name="failed">
      <description summary="frame copy failed">
        This event indicates that the attempted frame copy has failed.

        After receiving this event, the client should destroy the object.
      </description>
    </event>

    <request name="destroy" type="destructor">
      <description summary="delete this object, used or not">
        Destroys the frame. This request can be sent at any time by the
        client.
      </description>
    </request>

    <!-- Version 2 additions -->

    <request name="copy_with_damage" since="2">
      <description summary="copy the frame when it's damaged">
        Same as copy, except it waits until there is damage to copy.
      </description>
      <arg name="buffer" type="object" interface="wl_buffer"/>
    </request>

    <event name="damage" since="2">
      <description summary="carries the coordinates of the damaged region">
        This event is sent right before the ready event when copy_with_damage
        is requested. It may be generated multiple times for each
        copy_with_damage request.

        The arguments describe a box around an area that has changed since the
        last copy request that was derived from the current screencopy manager
        instance.

        The union of all regions received between the call to copy_with_damage
        and a ready event is the total damage since the prior ready event.
      </description>
      <arg name="x" type="uint" summary="damaged x coordinates"/>
      <arg name="y" type="uint" summary="damaged y coordinates"/>
      <arg name="width" type="uint" summary="current width"/>
      <arg name="height" type="uint" summary="current height"/>
    </event>

    <!-- Version 3 additions -->

    <event name="linux_dmabuf" since="3">
      <description summary="linux-dmabuf buffer information">
        Provides information about linux-dmabuf buffer parameters that need to
        be used for this frame. This event is sent once after the frame is
        created if linux-dmabuf buffers are supported.
      </description>
      <arg name="format" type="uint" summary="fourcc pixel format"/>
      <arg name="width" type="uint" summary="buffer width"/>
      <arg name="height" type="uint" summary="buffer height"/>
    </event>

    <event name="buffer_done" since="3">
      <description summary="all buffer types reported">
        This event is sent once after all buffer events have been sent.

        The client should proceed to create a buffer of one of the supported
        types, and send a "copy" request.
      </description>
    </event>
  </interface>
</protocol>

File: wf-recorder-0.5.0+git1/src/audio.cpp

#include "audio.hpp"
#include "config.h"

#ifdef HAVE_PULSE
#include "pulse.hpp"
#endif
#ifdef HAVE_PIPEWIRE
#include "pipewire.hpp"
#endif

AudioReader *AudioReader::create(AudioReaderParams params)
{
#ifdef HAVE_PIPEWIRE
    if (params.audio_backend == "pipewire")
    {
        AudioReader *pw = new PipeWireReader;
        pw->params = params;
        if (pw->init())
            return pw;
        delete pw;
    }
#endif
#ifdef HAVE_PULSE
    if (params.audio_backend == "pulse")
    {
        AudioReader *pa = new PulseReader;
        pa->params = params;
        if (pa->init())
            return pa;
        delete pa;
    }
#endif
    return nullptr;
}

File: wf-recorder-0.5.0+git1/src/audio.hpp

#ifndef AUDIO_HPP
#define AUDIO_HPP

#include <stdlib.h>
#include <stdint.h>
#include "config.h"
#include <string>

struct AudioReaderParams
{
    size_t audio_frame_size;
    uint32_t sample_rate;

    /* Can be NULL */
    char *audio_source;

    std::string audio_backend = DEFAULT_AUDIO_BACKEND;
};

class AudioReader
{
  public:
    virtual ~AudioReader() {}
    virtual bool init() = 0;
    virtual void start() = 0;

    AudioReaderParams params;

    static AudioReader *create(AudioReaderParams params);

    virtual uint64_t get_time_base() const { return 0; }
};

#endif /* end of include guard: AUDIO_HPP */

File: wf-recorder-0.5.0+git1/src/averr.c

#include "averr.h"

const char* averr(int err)
{
    static char buf[AV_ERROR_MAX_STRING_SIZE];
    av_make_error_string(buf, sizeof(buf), err);
    return buf;
}
07070103197DA7000081A400000000000000000000000167058077000000D2000000000000003400000000000000000000002300000000wf-recorder-0.5.0+git1/src/averr.h#include <libavutil/error.h> /* the macro av_err2str doesn't work in C++, so we have a wrapper for it here */ #ifdef __cplusplus extern "C" { #endif const char* averr(int err); #ifdef __cplusplus } #endif 07070103197DA8000081A400000000000000000000000167058077000009D7000000000000003400000000000000000000002B00000000wf-recorder-0.5.0+git1/src/buffer-pool.hpp#pragma once #include <array> #include <mutex> #include <atomic> #include <type_traits> class buffer_pool_buf { public: bool ready_capture() const { return released; } bool ready_encode() const { return available; } std::atomic<bool> released{true}; // if the buffer can be used to store new pending frames std::atomic<bool> available{false}; // if the buffer can be used to feed the encoder }; template <class T, int N> class buffer_pool { public: static_assert(std::is_base_of<buffer_pool_buf, T>::value, "T must be subclass of buffer_pool_buf"); buffer_pool() { for (size_t i = 0; i < bufs_size; ++i) { bufs[i] = new T; } } ~buffer_pool() { for (size_t i = 0; i < N; ++i) { delete bufs[i]; } } size_t size() const { return N; } const T* at(size_t i) const { return bufs[i]; } T& capture() { std::lock_guard<std::mutex> lock(mutex); return *bufs[capture_idx]; } T& encode() { std::lock_guard<std::mutex> lock(mutex); return *bufs[encode_idx]; } // Signal that the current capture buffer has been successfully obtained // from the compositor and select the next buffer to capture in. 
T& next_capture() { std::lock_guard<std::mutex> lock(mutex); bufs[capture_idx]->released = false; bufs[capture_idx]->available = true; size_t next = (capture_idx + 1) % bufs_size; if (!bufs[next]->ready_capture() && bufs_size < N) { bufs_size++; next = (capture_idx + 1) % bufs_size; for (size_t i = N - 1; i > next; --i) { bufs[i] = bufs[i - 1]; if (encode_idx == i - 1) { encode_idx = i; } } bufs[next] = new T; } capture_idx = next; return *bufs[capture_idx]; } // Signal that the encode buffer has been submitted for encoding // and select the next buffer for encoding. T& next_encode() { std::lock_guard<std::mutex> lock(mutex); bufs[encode_idx]->available = false; bufs[encode_idx]->released = true; encode_idx = (encode_idx + 1) % bufs_size; return *bufs[encode_idx]; } private: std::mutex mutex; std::array<T*, N> bufs; size_t bufs_size = 2; size_t capture_idx = 0; size_t encode_idx = 0; }; 07070103197DAF000081A400000000000000000000000167058077000078E7000000000000003400000000000000000000002C00000000wf-recorder-0.5.0+git1/src/frame-writer.cpp// Adapted from https://stackoverflow.com/questions/34511312/how-to-encode-a-video-from-several-images-generated-in-a-c-program-without-wri // (Later) adapted from https://github.com/apc-llc/moviemaker-cpp // // Audio encoding - thanks to wlstream, a lot of the code/ideas are taken from there #include <iostream> #include "frame-writer.hpp" #include <vector> #include <queue> #include <cstring> #include <sstream> #include "averr.h" #include <gbm.h> #define HAVE_CH_LAYOUT (LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 28, 100)) static const AVRational US_RATIONAL{1,1000000} ; // av_register_all was deprecated in 58.9.100, removed in 59.0.100 #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(59, 0, 100) class FFmpegInitialize { public : FFmpegInitialize() { // Loads the whole database of available codecs and formats. 
av_register_all(); } }; static FFmpegInitialize ffmpegInitialize; #endif void FrameWriter::init_hw_accel() { int ret = av_hwdevice_ctx_create(&this->hw_device_context, av_hwdevice_find_type_by_name("vaapi"), params.hw_device.c_str(), NULL, 0); if (ret != 0) { std::cerr << "Failed to create hw encoding device " << params.hw_device << ": " << averr(ret) << std::endl; std::exit(-1); } } void FrameWriter::load_codec_options(AVDictionary **dict) { using CodecOptions = std::map<std::string, std::string>; static const CodecOptions default_x264_options = { {"tune", "zerolatency"}, {"preset", "ultrafast"}, {"crf", "20"}, }; static const CodecOptions default_libvpx_options = { {"cpu-used", "5"}, {"deadline", "realtime"}, }; static const std::map<std::string, const CodecOptions&> default_codec_options = { {"libx264", default_x264_options}, {"libx265", default_x264_options}, {"libvpx", default_libvpx_options}, }; for (const auto& opts : default_codec_options) { if (params.codec.find(opts.first) != std::string::npos) { for (const auto& param : opts.second) { if (!params.codec_options.count(param.first)) params.codec_options[param.first] = param.second; } break; } } for (auto& opt : params.codec_options) { std::cerr << "Setting codec option: " << opt.first << "=" << opt.second << std::endl; av_dict_set(dict, opt.first.c_str(), opt.second.c_str(), 0); } } void FrameWriter::load_audio_codec_options(AVDictionary **dict) { for (auto& opt : params.audio_codec_options) { std::cerr << "Setting codec option: " << opt.first << "=" << opt.second << std::endl; av_dict_set(dict, opt.first.c_str(), opt.second.c_str(), 0); } } bool is_fmt_supported(AVPixelFormat fmt, const AVPixelFormat *supported) { for (int i = 0; supported[i] != AV_PIX_FMT_NONE; i++) { if (supported[i] == fmt) return true; } return false; } AVPixelFormat FrameWriter::get_input_format() { switch (params.format) { case INPUT_FORMAT_BGR0: return AV_PIX_FMT_BGR0; case INPUT_FORMAT_RGB0: return AV_PIX_FMT_RGB0; case 
INPUT_FORMAT_BGR8: return AV_PIX_FMT_RGB24; case INPUT_FORMAT_RGB565: return AV_PIX_FMT_RGB565LE; case INPUT_FORMAT_BGR565: return AV_PIX_FMT_BGR565LE; #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(56, 55, 100) case INPUT_FORMAT_X2RGB10: return AV_PIX_FMT_X2RGB10LE; #endif #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 7, 100) case INPUT_FORMAT_X2BGR10: return AV_PIX_FMT_X2BGR10LE; #endif case INPUT_FORMAT_RGBX64: return AV_PIX_FMT_RGBA64LE; case INPUT_FORMAT_BGRX64: return AV_PIX_FMT_BGRA64LE; #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 33, 101) case INPUT_FORMAT_RGBX64F: return AV_PIX_FMT_RGBAF16LE; #endif case INPUT_FORMAT_DMABUF: return AV_PIX_FMT_VAAPI; default: std::cerr << "Unknown format: " << params.format << std::endl; std::exit(-1); } } static const struct { int drm; AVPixelFormat av; } drm_av_format_table [] = { { GBM_FORMAT_ARGB8888, AV_PIX_FMT_BGRA }, { GBM_FORMAT_XRGB8888, AV_PIX_FMT_BGR0 }, { GBM_FORMAT_ABGR8888, AV_PIX_FMT_RGBA }, { GBM_FORMAT_XBGR8888, AV_PIX_FMT_RGB0 }, { GBM_FORMAT_RGBA8888, AV_PIX_FMT_ABGR }, { GBM_FORMAT_RGBX8888, AV_PIX_FMT_0BGR }, { GBM_FORMAT_BGRA8888, AV_PIX_FMT_ARGB }, { GBM_FORMAT_BGRX8888, AV_PIX_FMT_0RGB }, { GBM_FORMAT_XRGB2101010, AV_PIX_FMT_X2RGB10 }, }; static AVPixelFormat get_drm_av_format(int fmt) { for (size_t i = 0; i < sizeof(drm_av_format_table) / sizeof(drm_av_format_table[0]); ++i) { if (drm_av_format_table[i].drm == fmt) { return drm_av_format_table[i].av; } } std::cerr << "Failed to find AV format for" << fmt << std::endl; return AV_PIX_FMT_RGBA; } AVPixelFormat FrameWriter::lookup_pixel_format(std::string pix_fmt) { AVPixelFormat fmt = av_get_pix_fmt(pix_fmt.c_str()); if (fmt != AV_PIX_FMT_NONE) return fmt; std::cerr << "Failed to find the pixel format: " << pix_fmt << std::endl; std::exit(-1); } AVPixelFormat FrameWriter::handle_buffersink_pix_fmt(const AVCodec *codec) { /* If using the default codec and no pixel format is specified, * set the format to yuv420p for web friendly output by default */ 
if (params.codec == DEFAULT_CODEC && params.pix_fmt.empty()) params.pix_fmt = "yuv420p"; // Return with user chosen format if (!params.pix_fmt.empty()) return lookup_pixel_format(params.pix_fmt); auto in_fmt = get_input_format(); /* For codecs such as rawvideo no supported formats are listed */ if (!codec->pix_fmts) return in_fmt; /* If the codec supports getting the appropriate RGB format * directly, we want to use it since we don't have to convert data */ if (is_fmt_supported(in_fmt, codec->pix_fmts)) return in_fmt; /* Choose the format supported by the codec which best approximates the * input fmt. */ AVPixelFormat best_format = AV_PIX_FMT_NONE; for (int i = 0; codec->pix_fmts[i] != AV_PIX_FMT_NONE; i++) { int loss = 0; best_format = av_find_best_pix_fmt_of_2(best_format, codec->pix_fmts[i], in_fmt, false, &loss); } return best_format; } void FrameWriter::init_video_filters(const AVCodec *codec) { if (params.framerate != 0){ if (params.video_filter != "null" && params.video_filter.find("fps") == std::string::npos) { params.video_filter += ",fps=" + std::to_string(params.framerate); } else if (params.video_filter == "null"){ params.video_filter = "fps=" + std::to_string(params.framerate); } } this->videoFilterGraph = avfilter_graph_alloc(); av_opt_set(videoFilterGraph, "scale_sws_opts", "flags=fast_bilinear:src_range=1:dst_range=1", 0); const AVFilter* source = avfilter_get_by_name("buffer"); const AVFilter* sink = avfilter_get_by_name("buffersink"); if (!source || !sink) { std::cerr << "filtering source or sink element not found\n"; exit(-1); } if (this->hw_device_context) { this->hw_frame_context_in = av_hwframe_ctx_alloc(this->hw_device_context); AVHWFramesContext *hwfc = reinterpret_cast<AVHWFramesContext*>(this->hw_frame_context_in->data); hwfc->format = AV_PIX_FMT_VAAPI; hwfc->sw_format = get_drm_av_format(params.drm_format); hwfc->width = params.width; hwfc->height = params.height; int err = av_hwframe_ctx_init(this->hw_frame_context_in); if (err < 0) { 
std::cerr << "Cannot create hw frames context: " << averr(err) << std::endl; exit(-1); } } // Build the configuration of the 'buffer' filter. // See: ffmpeg -h filter=buffer // See: https://ffmpeg.org/ffmpeg-filters.html#buffer std::stringstream buffer_filter_config; buffer_filter_config << "video_size=" << params.width << "x" << params.height; buffer_filter_config << ":pix_fmt=" << (int)this->get_input_format(); buffer_filter_config << ":time_base=" << US_RATIONAL.num << "/" << US_RATIONAL.den; if (params.buffrate != 0) { buffer_filter_config << ":frame_rate=" << params.buffrate; } buffer_filter_config << ":pixel_aspect=1/1"; int err = avfilter_graph_create_filter(&this->videoFilterSourceCtx, source, "Source", buffer_filter_config.str().c_str(), NULL, this->videoFilterGraph); if (err < 0) { std::cerr << "Cannot create video filter in: " << averr(err) << std::endl; exit(-1); } AVBufferSrcParameters *p = av_buffersrc_parameters_alloc(); memset(p, 0, sizeof(*p)); p->format = AV_PIX_FMT_NONE; p->hw_frames_ctx = this->hw_frame_context_in; err = av_buffersrc_parameters_set(this->videoFilterSourceCtx, p); av_free(p); if (err < 0) { std::cerr << "Cannot set hwcontext filter in: " << averr(err) << std::endl; exit(-1); } err = avfilter_graph_create_filter(&this->videoFilterSinkCtx, sink, "Sink", NULL, NULL, this->videoFilterGraph); if (err < 0) { std::cerr << "Cannot create video filter out: " << averr(err) << std::endl; exit(-1); } // We also need to tell the sink which pixel formats are accepted // by the video encoder.
const AVPixelFormat picked_pix_fmt[] = { handle_buffersink_pix_fmt(codec), AV_PIX_FMT_NONE }; err = av_opt_set_int_list(this->videoFilterSinkCtx, "pix_fmts", picked_pix_fmt, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN); if (err < 0) { std::cerr << "Failed to set pix_fmts: " << averr(err) << std::endl; exit(-1); } // Create the connections to the filter graph // // The in/out swap is not a mistake: // // ---------- ----------------------------- -------- // | Source | ----> | in -> filter_graph -> out | ---> | Sink | // ---------- ----------------------------- -------- // // The 'in' of filter_graph is the output of the Source buffer // The 'out' of filter_graph is the input of the Sink buffer // AVFilterInOut *outputs = avfilter_inout_alloc(); outputs->name = av_strdup("in"); outputs->filter_ctx = this->videoFilterSourceCtx; outputs->pad_idx = 0; outputs->next = NULL; AVFilterInOut *inputs = avfilter_inout_alloc(); inputs->name = av_strdup("out"); inputs->filter_ctx = this->videoFilterSinkCtx; inputs->pad_idx = 0; inputs->next = NULL; if (!outputs->name || !inputs->name) { std::cerr << "Failed to allocate inout filter links" << std::endl; exit(-1); } std::cerr << "Using video filter: " << params.video_filter << std::endl; err = avfilter_graph_parse_ptr(this->videoFilterGraph, params.video_filter.c_str(), &inputs, &outputs, NULL); if (err < 0) { std::cerr << "Failed to parse graph filter: " << averr(err) << std::endl; exit(-1); } // Filters that create HW frames ('hwupload', 'hwmap', ...) need // AVBufferRef in their hw_device_ctx. Unfortunately, there is no // simple API to do that for filters created by avfilter_graph_parse_ptr().
// The code below is inspired from ffmpeg_filter.c if (this->hw_device_context) { for (unsigned i=0; i< this->videoFilterGraph->nb_filters; i++) { this->videoFilterGraph->filters[i]->hw_device_ctx = av_buffer_ref(this->hw_device_context); } } err = avfilter_graph_config(this->videoFilterGraph, NULL); if (err<0) { std::cerr << "Failed to configure graph filter: " << averr(err) << std::endl;; exit(-1) ; } if (params.enable_ffmpeg_debug_output) { std::cerr << std::string(80,'#') << std::endl ; std::cerr << avfilter_graph_dump(this->videoFilterGraph,0) << "\n"; std::cerr << std::string(80,'#') << std::endl ; } // The (input of the) sink is the output of the whole filter. AVFilterLink * filter_output = this->videoFilterSinkCtx->inputs[0] ; this->videoCodecCtx->width = filter_output->w; this->videoCodecCtx->height = filter_output->h; this->videoCodecCtx->pix_fmt = (AVPixelFormat)filter_output->format; this->videoCodecCtx->time_base = filter_output->time_base; this->videoCodecCtx->framerate = AVRational{1,0}; this->videoCodecCtx->sample_aspect_ratio = filter_output->sample_aspect_ratio; this->hw_frame_context = av_buffersink_get_hw_frames_ctx( this->videoFilterSinkCtx); avfilter_inout_free(&inputs); avfilter_inout_free(&outputs); } void FrameWriter::init_video_stream() { AVDictionary *options = NULL; load_codec_options(&options); const AVCodec* codec = avcodec_find_encoder_by_name(params.codec.c_str()); if (!codec) { std::cerr << "Failed to find the given codec: " << params.codec << std::endl; std::exit(-1); } videoStream = avformat_new_stream(fmtCtx, codec); if (!videoStream) { std::cerr << "Failed to open stream" << std::endl; std::exit(-1); } videoCodecCtx = avcodec_alloc_context3(codec); videoCodecCtx->width = params.width; videoCodecCtx->height = params.height; videoCodecCtx->time_base = US_RATIONAL; videoCodecCtx->color_range = AVCOL_RANGE_JPEG; if (params.framerate) { std::cerr << "Framerate: " << params.framerate << std::endl; } if (params.bframes != -1) 
videoCodecCtx->max_b_frames = params.bframes; if (!params.hw_device.empty()) { init_hw_accel(); } // The filters need to be initialized after we have initialized // videoCodecCtx. // // After loading the filters, we should update the hw frames ctx. init_video_filters(codec); if (this->hw_frame_context) { videoCodecCtx->hw_frames_ctx = av_buffer_ref(this->hw_frame_context); } if (fmtCtx->oformat->flags & AVFMT_GLOBALHEADER) { videoCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; } int ret; char err[256]; if ((ret = avcodec_open2(videoCodecCtx, codec, &options)) < 0) { av_strerror(ret, err, 256); std::cerr << "avcodec_open2 failed: " << err << std::endl; std::exit(-1); } av_dict_free(&options); if ((ret = avcodec_parameters_from_context(videoStream->codecpar, videoCodecCtx)) < 0) { av_strerror(ret, err, 256); std::cerr << "avcodec_parameters_from_context failed: " << err << std::endl; std::exit(-1); } } #ifdef HAVE_AUDIO #if HAVE_CH_LAYOUT static uint64_t get_codec_channel_layout(const AVCodec *codec) { int i = 0; if (!codec->ch_layouts) return AV_CH_LAYOUT_STEREO; while (1) { if (!av_channel_layout_check(&codec->ch_layouts[i])) break; if (codec->ch_layouts[i].u.mask == AV_CH_LAYOUT_STEREO) return codec->ch_layouts[i].u.mask; i++; } return codec->ch_layouts[0].u.mask; } #else static uint64_t get_codec_channel_layout(const AVCodec *codec) { int i = 0; if (!codec->channel_layouts) return AV_CH_LAYOUT_STEREO; while (1) { if (!codec->channel_layouts[i]) break; if (codec->channel_layouts[i] == AV_CH_LAYOUT_STEREO) return codec->channel_layouts[i]; i++; } return codec->channel_layouts[0]; } #endif static enum AVSampleFormat get_codec_auto_sample_fmt(const AVCodec *codec) { int i = 0; if (!codec->sample_fmts) return av_get_sample_fmt(FALLBACK_AUDIO_SAMPLE_FMT); while (1) { if (codec->sample_fmts[i] == -1) break; if (av_get_bytes_per_sample(codec->sample_fmts[i]) >= 2) return codec->sample_fmts[i]; i++; } return codec->sample_fmts[0]; } bool check_fmt_available(const AVCodec 
*codec, AVSampleFormat fmt){ for (const enum AVSampleFormat *sample_ptr = codec->sample_fmts; *sample_ptr != -1; sample_ptr++) { if (*sample_ptr == fmt) { return true; } } return false; } static enum AVSampleFormat convert_codec_sample_fmt(const AVCodec *codec, std::string requested_fmt) { enum AVSampleFormat converted_fmt = av_get_sample_fmt(requested_fmt.c_str()); if (converted_fmt == AV_SAMPLE_FMT_NONE) { std::cerr << "Failed to find the given sample format: " << requested_fmt << std::endl; std::exit(-1); } else if (!codec->sample_fmts || check_fmt_available(codec, converted_fmt)) { std::cerr << "Using sample format " << av_get_sample_fmt_name(converted_fmt) << " for audio codec " << codec->name << std::endl; return converted_fmt; } else { std::cerr << "Codec " << codec->name << " does not support sample format " << av_get_sample_fmt_name(converted_fmt) << std::endl; std::exit(-1); } } void FrameWriter::init_audio_stream() { AVDictionary *options = NULL; load_codec_options(&options); const AVCodec* codec = avcodec_find_encoder_by_name(params.audio_codec.c_str()); if (!codec) { std::cerr << "Failed to find the given audio codec: " << params.audio_codec << std::endl; std::exit(-1); } audioStream = avformat_new_stream(fmtCtx, codec); if (!audioStream) { std::cerr << "Failed to open audio stream" << std::endl; std::exit(-1); } audioCodecCtx = avcodec_alloc_context3(codec); if (params.sample_fmt.size() == 0) { audioCodecCtx->sample_fmt = get_codec_auto_sample_fmt(codec); std::cerr << "Choosing sample format " << av_get_sample_fmt_name(audioCodecCtx->sample_fmt) << " for audio codec " << codec->name << std::endl; } else { audioCodecCtx->sample_fmt = convert_codec_sample_fmt(codec, params.sample_fmt); } #if HAVE_CH_LAYOUT av_channel_layout_from_mask(&audioCodecCtx->ch_layout, get_codec_channel_layout(codec)); #else audioCodecCtx->channel_layout = get_codec_channel_layout(codec); audioCodecCtx->channels =
av_get_channel_layout_nb_channels(audioCodecCtx->channel_layout); #endif audioCodecCtx->sample_rate = params.sample_rate; audioCodecCtx->time_base = (AVRational) { 1, 1000 }; if (fmtCtx->oformat->flags & AVFMT_GLOBALHEADER) audioCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; int err; if ((err = avcodec_open2(audioCodecCtx, codec, NULL)) < 0) { std::cerr << "(audio) avcodec_open2 failed: " << averr(err) << std::endl; std::exit(-1); } swrCtx = swr_alloc(); if (!swrCtx) { std::cerr << "Failed to allocate swr context" << std::endl; std::exit(-1); } av_opt_set_int(swrCtx, "in_sample_rate", params.sample_rate, 0); av_opt_set_int(swrCtx, "out_sample_rate", audioCodecCtx->sample_rate, 0); av_opt_set_sample_fmt(swrCtx, "in_sample_fmt", AV_SAMPLE_FMT_FLT, 0); av_opt_set_sample_fmt(swrCtx, "out_sample_fmt", audioCodecCtx->sample_fmt, 0); #if HAVE_CH_LAYOUT AVChannelLayout in_chlayout = AV_CHANNEL_LAYOUT_STEREO; av_opt_set_chlayout(swrCtx, "in_chlayout", &in_chlayout, 0); av_opt_set_chlayout(swrCtx, "out_chlayout", &audioCodecCtx->ch_layout, 0); #else av_opt_set_channel_layout(swrCtx, "in_channel_layout", AV_CH_LAYOUT_STEREO, 0); av_opt_set_channel_layout(swrCtx, "out_channel_layout", audioCodecCtx->channel_layout, 0); #endif if (swr_init(swrCtx)) { std::cerr << "Failed to initialize swr" << std::endl; std::exit(-1); } int ret; if ((ret = avcodec_parameters_from_context(audioStream->codecpar, audioCodecCtx)) < 0) { char errmsg[256]; av_strerror(ret, errmsg, sizeof(errmsg)); std::cerr << "avcodec_parameters_from_context failed: " << errmsg << std::endl; std::exit(-1); } } #endif void FrameWriter::init_codecs() { init_video_stream(); #ifdef HAVE_AUDIO if (params.enable_audio) init_audio_stream(); #endif av_dump_format(fmtCtx, 0, params.file.c_str(), 1); if (avio_open(&fmtCtx->pb, params.file.c_str(), AVIO_FLAG_WRITE)) { std::cerr << "avio_open failed" << std::endl; std::exit(-1); } AVDictionary *dummy = NULL; char err[256]; int ret; if ((ret = avformat_write_header(fmtCtx, &dummy)) != 0)
{ std::cerr << "Failed to write file header" << std::endl; av_strerror(ret, err, 256); std::cerr << err << std::endl; std::exit(-1); } av_dict_free(&dummy); } static const char* determine_output_format(const FrameWriterParams& params) { if (!params.muxer.empty()) return params.muxer.c_str(); if (params.file.find("rtmp") == 0) return "flv"; if (params.file.find("udp") == 0) return "mpegts"; return NULL; } FrameWriter::FrameWriter(const FrameWriterParams& _params) : params(_params) { if (params.enable_ffmpeg_debug_output) av_log_set_level(AV_LOG_DEBUG); #ifdef HAVE_LIBAVDEVICE avdevice_register_all(); #endif // Prepare the format and codec data so that the header, frame data // and end of file are written properly. this->outputFmt = av_guess_format(NULL, params.file.c_str(), NULL); auto streamFormat = determine_output_format(params); auto context_ret = avformat_alloc_output_context2(&this->fmtCtx, NULL, streamFormat, params.file.c_str()); if (context_ret < 0) { std::cerr << "Failed to allocate output context" << std::endl; std::exit(-1); } init_codecs(); } void FrameWriter::encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt) { /* send the frame to the encoder */ int ret = avcodec_send_frame(enc_ctx, frame); if (ret < 0) { fprintf(stderr, "error sending a frame for encoding\n"); return; } while (ret >= 0) { ret = avcodec_receive_packet(enc_ctx, pkt); if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) { return; } if (ret < 0) { fprintf(stderr, "error during encoding\n"); return; } finish_frame(enc_ctx, *pkt); } } bool FrameWriter::push_frame(AVFrame *frame, int64_t usec) { frame->pts = usec; // We use time_base = US_RATIONAL = 1/1000000 // Push the RGB frame into the filtergraph int err = av_buffersrc_add_frame_flags(videoFilterSourceCtx, frame, 0); if (err < 0) { std::cerr << "Error while feeding the filtergraph!"
<< std::endl; return false; } // Pull filtered frames from the filtergraph while (true) { AVFrame *filtered_frame = av_frame_alloc(); if (!filtered_frame) { std::cerr << "Error av_frame_alloc" << std::endl; return false; } err = av_buffersink_get_frame(videoFilterSinkCtx, filtered_frame); if (err == AVERROR(EAGAIN)) { // Not an error. No frame available. // Try again later. av_frame_free(&filtered_frame); break; } else if (err == AVERROR_EOF) { // There will be no more output frames on this sink. // That could happen if a filter like 'trim' is used to // stop after a given time. return false; } else if (err < 0) { av_frame_free(&filtered_frame); return false; } filtered_frame->pict_type = AV_PICTURE_TYPE_NONE; // So we have a frame. Encode it! AVPacket *pkt = av_packet_alloc(); pkt->data = NULL; pkt->size = 0; encode(videoCodecCtx, filtered_frame, pkt); av_frame_free(&filtered_frame); av_packet_free(&pkt); } av_frame_free(&frame); return true; } bool FrameWriter::add_frame(const uint8_t* pixels, int64_t usec, bool y_invert) { /* Calculate data after y-inversion */ int stride[] = {int(params.stride)}; const uint8_t *formatted_pixels = pixels; if (y_invert) { formatted_pixels += stride[0] * (params.height - 1); stride[0] *= -1; } auto frame = av_frame_alloc(); if (!frame) { std::cerr << "Failed to allocate frame!" << std::endl; return false; } frame->data[0] = (uint8_t*)formatted_pixels; frame->linesize[0] = stride[0]; frame->format = get_input_format(); frame->width = params.width; frame->height = params.height; return push_frame(frame, usec); } bool FrameWriter::add_frame(struct gbm_bo *bo, int64_t usec, bool y_invert) { if (y_invert) { std::cerr << "Y_INVERT not supported with dmabuf" << std::endl; return false; } auto frame = av_frame_alloc(); if (!frame) { std::cerr << "Failed to allocate frame!" 
<< std::endl; return false; } if (mapped_frames.find(bo) == mapped_frames.end()) { auto vaapi_frame = av_frame_alloc(); if (!vaapi_frame) { std::cerr << "Failed to allocate frame!" << std::endl; return false; } AVDRMFrameDescriptor *desc = (AVDRMFrameDescriptor*) av_mallocz(sizeof(AVDRMFrameDescriptor)); desc->nb_layers = 1; desc->nb_objects = 1; desc->objects[0].fd = gbm_bo_get_fd(bo); desc->objects[0].format_modifier = gbm_bo_get_modifier(bo); desc->objects[0].size = gbm_bo_get_stride(bo) * gbm_bo_get_height(bo); desc->layers[0].format = gbm_bo_get_format(bo); desc->layers[0].nb_planes = gbm_bo_get_plane_count(bo); for (int i = 0; i < gbm_bo_get_plane_count(bo); ++i) { desc->layers[0].planes[i].object_index = 0; desc->layers[0].planes[i].pitch = gbm_bo_get_stride_for_plane(bo, i); desc->layers[0].planes[i].offset = gbm_bo_get_offset(bo, i); } frame->width = gbm_bo_get_width(bo); frame->height = gbm_bo_get_height(bo); frame->format = AV_PIX_FMT_DRM_PRIME; frame->data[0] = reinterpret_cast<uint8_t*>(desc); frame->buf[0] = av_buffer_create(frame->data[0], sizeof(*desc), [](void *, uint8_t *data) { av_free(data); }, frame, 0); vaapi_frame->format = AV_PIX_FMT_VAAPI; vaapi_frame->hw_frames_ctx = av_buffer_ref(this->hw_frame_context_in); int ret = av_hwframe_map(vaapi_frame, frame, AV_HWFRAME_MAP_READ); av_frame_unref(frame); if (ret < 0) { std::cerr << "Failed to map vaapi frame " << averr(ret) << std::endl; return false; } mapped_frames[bo] = vaapi_frame; } av_frame_ref(frame, mapped_frames[bo]); return push_frame(frame, usec); } #ifdef HAVE_AUDIO #define SRC_RATE 1e6 #define DST_RATE 1e3 static int64_t conv_audio_pts(SwrContext *ctx, int64_t in, int sample_rate) { //int64_t d = (int64_t) AUDIO_RATE * AUDIO_RATE; int64_t d = (int64_t) sample_rate * sample_rate; /* Convert from audio_src_tb to 1/(src_samplerate * dst_samplerate) */ in = av_rescale_rnd(in, d, SRC_RATE, AV_ROUND_NEAR_INF); /* In units of 1/(src_samplerate * dst_samplerate) */ in = swr_next_pts(ctx, in); 
/* Convert from 1/(src_samplerate * dst_samplerate) to audio_dst_tb */ return av_rescale_rnd(in, DST_RATE, d, AV_ROUND_NEAR_INF); } void FrameWriter::send_audio_pkt(AVFrame *frame) { AVPacket *pkt = av_packet_alloc(); pkt->data = NULL; pkt->size = 0; encode(audioCodecCtx, frame, pkt); av_packet_free(&pkt); } size_t FrameWriter::get_audio_buffer_size() { return audioCodecCtx->frame_size << 3; } void FrameWriter::add_audio(const void* buffer) { AVFrame *inputf = av_frame_alloc(); inputf->sample_rate = params.sample_rate; inputf->format = AV_SAMPLE_FMT_FLT; #if HAVE_CH_LAYOUT inputf->ch_layout = (AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO; #else inputf->channel_layout = AV_CH_LAYOUT_STEREO; #endif inputf->nb_samples = audioCodecCtx->frame_size; av_frame_get_buffer(inputf, 0); memcpy(inputf->data[0], buffer, get_audio_buffer_size()); AVFrame *outputf = av_frame_alloc(); outputf->format = audioCodecCtx->sample_fmt; outputf->sample_rate = audioCodecCtx->sample_rate; #if HAVE_CH_LAYOUT av_channel_layout_copy(&outputf->ch_layout, &audioCodecCtx->ch_layout); #else outputf->channel_layout = audioCodecCtx->channel_layout; #endif outputf->nb_samples = audioCodecCtx->frame_size; av_frame_get_buffer(outputf, 0); outputf->pts = conv_audio_pts(swrCtx, INT64_MIN, params.sample_rate); swr_convert_frame(swrCtx, outputf, inputf); send_audio_pkt(outputf); av_frame_free(&inputf); av_frame_free(&outputf); } #endif void FrameWriter::finish_frame(AVCodecContext *enc_ctx, AVPacket& pkt) { static std::mutex fmt_mutex, pending_mutex; if (enc_ctx == videoCodecCtx) { av_packet_rescale_ts(&pkt, videoCodecCtx->time_base, videoStream->time_base); pkt.stream_index = videoStream->index; } #ifdef HAVE_AUDIO else { av_packet_rescale_ts(&pkt, (AVRational){ 1, 1000 }, audioStream->time_base); pkt.stream_index = audioStream->index; } /* We use two locks to ensure that if WLOG the audio thread is waiting for * the video one, when the video becomes ready the audio thread will be the * next one to obtain the 
lock */ if (params.enable_audio) { pending_mutex.lock(); fmt_mutex.lock(); pending_mutex.unlock(); } #endif if (av_interleaved_write_frame(fmtCtx, &pkt) != 0) { params.write_aborted_flag = true; } av_packet_unref(&pkt); #ifdef HAVE_AUDIO if (params.enable_audio) fmt_mutex.unlock(); #endif } FrameWriter::~FrameWriter() { // Writing the delayed frames: AVPacket *pkt = av_packet_alloc(); encode(videoCodecCtx, NULL, pkt); #ifdef HAVE_AUDIO if (params.enable_audio) { encode(audioCodecCtx, NULL, pkt); } #endif // Writing the end of the file. av_write_trailer(fmtCtx); // Closing the file. if (outputFmt && (!(outputFmt->flags & AVFMT_NOFILE))) avio_closep(&fmtCtx->pb); // Freeing all the allocated memory: avcodec_free_context(&videoCodecCtx); #ifdef HAVE_AUDIO if (params.enable_audio) avcodec_free_context(&audioCodecCtx); #endif av_packet_free(&pkt); // TODO: free all the hw accel avformat_free_context(fmtCtx); } 07070103197DA9000081A40000000000000000000000016705807700000F41000000000000003400000000000000000000002C00000000wf-recorder-0.5.0+git1/src/frame-writer.hpp// Adapted from https://stackoverflow.com/questions/34511312/how-to-encode-a-video-from-several-images-generated-in-a-c-program-without-wri // (Later) adapted from https://github.com/apc-llc/moviemaker-cpp #ifndef FRAME_WRITER #define FRAME_WRITER #include <stdint.h> #include <string> #include <vector> #include <map> #include <atomic> #include "config.h" extern "C" { #include <libswresample/swresample.h> #include <libavcodec/avcodec.h> #ifdef HAVE_LIBAVDEVICE #include <libavdevice/avdevice.h> #endif #include <libavutil/mathematics.h> #include <libavformat/avformat.h> #include <libavfilter/avfilter.h> #include <libavfilter/buffersink.h> #include <libavfilter/buffersrc.h> #include <libavutil/pixdesc.h> #include <libavutil/hwcontext.h> #include <libavutil/opt.h> #include <libavutil/hwcontext_drm.h> } #include "config.h" enum InputFormat { INPUT_FORMAT_BGR0, INPUT_FORMAT_RGB0, INPUT_FORMAT_BGR8, INPUT_FORMAT_RGB565, 
INPUT_FORMAT_BGR565, INPUT_FORMAT_X2RGB10, INPUT_FORMAT_X2BGR10, INPUT_FORMAT_RGBX64, INPUT_FORMAT_BGRX64, INPUT_FORMAT_RGBX64F, INPUT_FORMAT_DMABUF, }; struct FrameWriterParams { std::string file; int width; int height; int stride; InputFormat format; int drm_format; std::string video_filter = "null"; // dummy filter std::string codec; std::string audio_codec; std::string muxer; std::string pix_fmt; std::string sample_fmt; std::string hw_device; // used only if codec contains vaapi std::map<std::string, std::string> codec_options; std::map<std::string, std::string> audio_codec_options; int framerate = 0; int sample_rate; int buffrate = 0; int64_t audio_sync_offset; bool enable_audio; bool enable_ffmpeg_debug_output; int bframes; std::atomic<bool>& write_aborted_flag; FrameWriterParams(std::atomic<bool>& flag): write_aborted_flag(flag) {} }; class FrameWriter { FrameWriterParams params; void load_codec_options(AVDictionary **dict); void load_audio_codec_options(AVDictionary **dict); const AVOutputFormat* outputFmt; AVStream* videoStream; AVCodecContext* videoCodecCtx; AVFormatContext* fmtCtx; AVFilterContext* videoFilterSourceCtx = NULL; AVFilterContext* videoFilterSinkCtx = NULL; AVFilterGraph* videoFilterGraph = NULL; AVBufferRef *hw_device_context = NULL; AVBufferRef *hw_frame_context = NULL; AVBufferRef *hw_frame_context_in = NULL; std::map<struct gbm_bo*, AVFrame*> mapped_frames; AVPixelFormat lookup_pixel_format(std::string pix_fmt); AVPixelFormat handle_buffersink_pix_fmt(const AVCodec *codec); AVPixelFormat get_input_format(); void init_hw_accel(); void init_codecs(); void init_video_filters(const AVCodec *codec); void init_video_stream(); void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt); #ifdef HAVE_AUDIO SwrContext *swrCtx; AVStream *audioStream; AVCodecContext *audioCodecCtx; void init_swr(); void init_audio_stream(); void send_audio_pkt(AVFrame *frame); #endif void finish_frame(AVCodecContext *enc_ctx, AVPacket& pkt); bool 
    push_frame(AVFrame *frame, int64_t usec);

  public:
    FrameWriter(const FrameWriterParams& params);
    bool add_frame(const uint8_t* pixels, int64_t usec, bool y_invert);
    bool add_frame(struct gbm_bo *bo, int64_t usec, bool y_invert);

#ifdef HAVE_AUDIO
    /* Buffer must have size get_audio_buffer_size() */
    void add_audio(const void* buffer);
    size_t get_audio_buffer_size();
#endif

    ~FrameWriter();
};

#include <memory>
#include <mutex>
#include <atomic>

extern std::mutex frame_writer_mutex, frame_writer_pending_mutex;
extern std::unique_ptr<FrameWriter> frame_writer;
extern std::atomic<bool> exit_main_loop;

#endif // FRAME_WRITER

==== wf-recorder-0.5.0+git1/src/main.cpp ====

#define _XOPEN_SOURCE 700
#define _POSIX_C_SOURCE 199309L

#include <iostream>
#include <optional>
#include <list>
#include <string>
#include <thread>
#include <mutex>
#include <atomic>

#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <signal.h>
#include <unistd.h>
#include <wayland-client-protocol.h>
#include <gbm.h>
#include <fcntl.h>
#include <xf86drm.h>

#include "frame-writer.hpp"
#include "buffer-pool.hpp"
#include "wlr-screencopy-unstable-v1-client-protocol.h"
#include "xdg-output-unstable-v1-client-protocol.h"
#include "linux-dmabuf-unstable-v1-client-protocol.h"
#include "config.h"

#ifdef HAVE_AUDIO
#include "audio.hpp"
AudioReaderParams audioParams;
#endif

#define MAX_FRAME_FAILURES 16

static const int GRACEFUL_TERMINATION_SIGNALS[] = { SIGTERM, SIGINT, SIGHUP };

std::mutex frame_writer_mutex, frame_writer_pending_mutex;
std::unique_ptr<FrameWriter> frame_writer;

static int drm_fd = -1;
static struct gbm_device *gbm_device = NULL;
static std::string drm_device_name;

static struct wl_shm *shm = NULL;
static struct zxdg_output_manager_v1 *xdg_output_manager = NULL;
static struct zwlr_screencopy_manager_v1
    *screencopy_manager = NULL;
static struct zwp_linux_dmabuf_v1 *dmabuf = NULL;

void request_next_frame();

struct wf_recorder_output
{
    wl_output *output;
    zxdg_output_v1 *zxdg_output;
    std::string name, description;
    int32_t x, y, width, height;
};

std::list<wf_recorder_output> available_outputs;

static void handle_xdg_output_logical_position(void*,
    zxdg_output_v1* zxdg_output, int32_t x, int32_t y)
{
    for (auto& wo : available_outputs)
    {
        if (wo.zxdg_output == zxdg_output)
        {
            wo.x = x;
            wo.y = y;
        }
    }
}

static void handle_xdg_output_logical_size(void*,
    zxdg_output_v1* zxdg_output, int32_t w, int32_t h)
{
    for (auto& wo : available_outputs)
    {
        if (wo.zxdg_output == zxdg_output)
        {
            wo.width = w;
            wo.height = h;
        }
    }
}

static void handle_xdg_output_done(void*, zxdg_output_v1*)
{
}

static void handle_xdg_output_name(void*, zxdg_output_v1 *zxdg_output_v1,
    const char *name)
{
    for (auto& wo : available_outputs)
    {
        if (wo.zxdg_output == zxdg_output_v1)
            wo.name = name;
    }
}

static void handle_xdg_output_description(void*, zxdg_output_v1 *zxdg_output_v1,
    const char *description)
{
    for (auto& wo : available_outputs)
    {
        if (wo.zxdg_output == zxdg_output_v1)
            wo.description = description;
    }
}

const zxdg_output_v1_listener xdg_output_implementation = {
    .logical_position = handle_xdg_output_logical_position,
    .logical_size = handle_xdg_output_logical_size,
    .done = handle_xdg_output_done,
    .name = handle_xdg_output_name,
    .description = handle_xdg_output_description
};

struct wf_buffer : public buffer_pool_buf
{
    struct gbm_bo *bo = nullptr;
    zwp_linux_buffer_params_v1 *params = nullptr;
    struct wl_buffer *wl_buffer = nullptr;
    void *data = nullptr;
    size_t size = 0;
    enum wl_shm_format format;
    int drm_format;
    int width, height, stride;
    bool y_invert;
    timespec presented;
    uint64_t base_usec;
};

std::atomic<bool> exit_main_loop{false};

buffer_pool<wf_buffer, 16> buffers;

bool buffer_copy_done = false;

static int backingfile(off_t size)
{
    char name[] = "/tmp/wf-recorder-shared-XXXXXX";
    int fd = mkstemp(name);
    if (fd < 0)
    {
        return -1;
    }

    int ret;
    /* Retry if ftruncate() is interrupted by a signal; it reports EINTR via
     * errno, not via its return value (which is only ever 0 or -1). */
    while ((ret = ftruncate(fd, size)) == -1 && errno == EINTR)
    {
        // No-op
    }
    if (ret < 0)
    {
        close(fd);
        return -1;
    }

    unlink(name);
    return fd;
}

static struct wl_buffer *create_shm_buffer(uint32_t fmt,
    int width, int height, int stride, void **data_out)
{
    int size = stride * height;
    int fd = backingfile(size);
    if (fd < 0)
    {
        fprintf(stderr, "creating a buffer file for %d B failed: %m\n", size);
        return NULL;
    }

    void *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED)
    {
        fprintf(stderr, "mmap failed: %m\n");
        close(fd);
        return NULL;
    }

    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    close(fd);
    struct wl_buffer *buffer = wl_shm_pool_create_buffer(pool, 0,
        width, height, stride, fmt);
    wl_shm_pool_destroy(pool);

    *data_out = data;
    return buffer;
}

void free_shm_buffer(wf_buffer& buffer)
{
    if (buffer.wl_buffer == NULL)
    {
        return;
    }

    munmap(buffer.data, buffer.size);
    wl_buffer_destroy(buffer.wl_buffer);
    buffer.wl_buffer = NULL;
}

static bool use_damage = true;
static bool use_dmabuf = false;
static bool use_hwupload = false;

static uint32_t wl_shm_to_drm_format(uint32_t format)
{
    if (format == WL_SHM_FORMAT_ARGB8888)
    {
        return GBM_FORMAT_ARGB8888;
    } else if (format == WL_SHM_FORMAT_XRGB8888)
    {
        return GBM_FORMAT_XRGB8888;
    } else
    {
        return format;
    }
}

static void frame_handle_buffer(void *, struct zwlr_screencopy_frame_v1 *frame,
    uint32_t format, uint32_t width, uint32_t height, uint32_t stride)
{
    if (use_dmabuf)
    {
        return;
    }

    auto& buffer = buffers.capture();
    auto old_format = buffer.format;
    buffer.format = (wl_shm_format)format;
    buffer.drm_format = wl_shm_to_drm_format(format);
    buffer.width = width;
    buffer.height = height;
    buffer.stride = stride;

    /* ffmpeg requires even width and height */
    if (buffer.width % 2)
        buffer.width -= 1;
    if (buffer.height % 2)
        buffer.height -= 1;

    if (!buffer.wl_buffer || old_format != format)
    {
        free_shm_buffer(buffer);
        buffer.wl_buffer = create_shm_buffer(format, width, height, stride,
            &buffer.data);
    }

    if (buffer.wl_buffer == NULL)
    {
        fprintf(stderr, "failed to create buffer\n");
        exit(EXIT_FAILURE);
    }

    if (use_damage)
    {
        zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
    } else
    {
        zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
    }
}

static void frame_handle_flags(void*, struct zwlr_screencopy_frame_v1 *, uint32_t flags)
{
    buffers.capture().y_invert = flags & ZWLR_SCREENCOPY_FRAME_V1_FLAGS_Y_INVERT;
}

int32_t frame_failed_cnt = 0;

static void frame_handle_ready(void *, struct zwlr_screencopy_frame_v1 *,
    uint32_t tv_sec_hi, uint32_t tv_sec_low, uint32_t tv_nsec)
{
    auto& buffer = buffers.capture();
    buffer_copy_done = true;
    buffer.presented.tv_sec = ((1ll * tv_sec_hi) << 32ll) | tv_sec_low;
    buffer.presented.tv_nsec = tv_nsec;
    frame_failed_cnt = 0;
}

static void frame_handle_failed(void *, struct zwlr_screencopy_frame_v1 *)
{
    std::cerr << "Failed to copy frame, retrying..." << std::endl;
    ++frame_failed_cnt;
    request_next_frame();
    if (frame_failed_cnt > MAX_FRAME_FAILURES)
    {
        std::cerr << "Failed to copy frame too many times, exiting!"
            << std::endl;
        exit_main_loop = true;
    }
}

static void frame_handle_damage(void *, struct zwlr_screencopy_frame_v1 *,
    uint32_t, uint32_t, uint32_t, uint32_t)
{
}

static void dmabuf_created(void *data, struct zwp_linux_buffer_params_v1 *,
    struct wl_buffer *wl_buffer)
{
    auto& buffer = buffers.capture();
    buffer.wl_buffer = wl_buffer;

    zwlr_screencopy_frame_v1 *frame = (zwlr_screencopy_frame_v1*) data;
    if (use_damage)
    {
        zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
    } else
    {
        zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
    }
}

static void dmabuf_failed(void *, struct zwp_linux_buffer_params_v1 *)
{
    std::cerr << "Failed to create dmabuf" << std::endl;
    exit_main_loop = true;
}

static const struct zwp_linux_buffer_params_v1_listener params_listener = {
    .created = dmabuf_created,
    .failed = dmabuf_failed,
};

static wl_shm_format drm_to_wl_shm_format(uint32_t format)
{
    if (format == GBM_FORMAT_ARGB8888)
    {
        return WL_SHM_FORMAT_ARGB8888;
    } else if (format == GBM_FORMAT_XRGB8888)
    {
        return WL_SHM_FORMAT_XRGB8888;
    } else
    {
        return (wl_shm_format)format;
    }
}

static void frame_handle_linux_dmabuf(void *, struct zwlr_screencopy_frame_v1 *frame,
    uint32_t format, uint32_t width, uint32_t height)
{
    if (!use_dmabuf)
    {
        return;
    }

    auto& buffer = buffers.capture();
    auto old_format = buffer.format;
    buffer.format = drm_to_wl_shm_format(format);
    buffer.drm_format = format;
    buffer.width = width;
    buffer.height = height;

    if (!buffer.wl_buffer || (old_format != buffer.format))
    {
        if (buffer.bo)
        {
            if (buffer.wl_buffer)
            {
                wl_buffer_destroy(buffer.wl_buffer);
            }
            zwp_linux_buffer_params_v1_destroy(buffer.params);
            gbm_bo_destroy(buffer.bo);
        }

        const uint64_t modifier = 0; // DRM_FORMAT_MOD_LINEAR
        buffer.bo = gbm_bo_create_with_modifiers(gbm_device, buffer.width,
            buffer.height, format, &modifier, 1);
        if (buffer.bo == NULL)
        {
            buffer.bo = gbm_bo_create(gbm_device, buffer.width, buffer.height,
                format, GBM_BO_USE_LINEAR | GBM_BO_USE_RENDERING);
        }
        if (buffer.bo == NULL)
        {
            std::cerr <<
                "Failed to create gbm bo" << std::endl;
            exit_main_loop = true;
            return;
        }

        buffer.stride = gbm_bo_get_stride(buffer.bo);
        buffer.params = zwp_linux_dmabuf_v1_create_params(dmabuf);

        uint64_t mod = gbm_bo_get_modifier(buffer.bo);
        zwp_linux_buffer_params_v1_add(buffer.params, gbm_bo_get_fd(buffer.bo),
            0, gbm_bo_get_offset(buffer.bo, 0), gbm_bo_get_stride(buffer.bo),
            mod >> 32, mod & 0xffffffff);
        zwp_linux_buffer_params_v1_add_listener(buffer.params, &params_listener, frame);
        zwp_linux_buffer_params_v1_create(buffer.params, buffer.width,
            buffer.height, format, 0);
    } else
    {
        if (use_damage)
        {
            zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
        } else
        {
            zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
        }
    }
}

static void frame_handle_buffer_done(void *, struct zwlr_screencopy_frame_v1 *)
{
}

static const struct zwlr_screencopy_frame_v1_listener frame_listener = {
    .buffer = frame_handle_buffer,
    .flags = frame_handle_flags,
    .ready = frame_handle_ready,
    .failed = frame_handle_failed,
    .damage = frame_handle_damage,
    .linux_dmabuf = frame_handle_linux_dmabuf,
    .buffer_done = frame_handle_buffer_done,
};

static void dmabuf_feedback_done(void *, struct zwp_linux_dmabuf_feedback_v1 *feedback)
{
    zwp_linux_dmabuf_feedback_v1_destroy(feedback);
}

static void dmabuf_feedback_format_table(void *, struct zwp_linux_dmabuf_feedback_v1 *,
    int32_t fd, uint32_t)
{
    close(fd);
}

static void dmabuf_feedback_main_device(void *, struct zwp_linux_dmabuf_feedback_v1 *,
    struct wl_array *device)
{
    dev_t dev_id;
    memcpy(&dev_id, device->data, device->size);

    drmDevice *dev = NULL;
    if (drmGetDeviceFromDevId(dev_id, 0, &dev) != 0)
    {
        std::cerr << "Failed to get DRM device from dev id " << strerror(errno) << std::endl;
        return;
    }

    if (dev->available_nodes & (1 << DRM_NODE_RENDER))
    {
        drm_device_name = dev->nodes[DRM_NODE_RENDER];
    } else if (dev->available_nodes & (1 << DRM_NODE_PRIMARY))
    {
        drm_device_name = dev->nodes[DRM_NODE_PRIMARY];
    }

    drmFreeDevice(&dev);
}

static void
    dmabuf_feedback_tranche_done(void *, struct zwp_linux_dmabuf_feedback_v1 *)
{
}

static void dmabuf_feedback_tranche_target_device(void *,
    struct zwp_linux_dmabuf_feedback_v1 *, struct wl_array *)
{
}

static void dmabuf_feedback_tranche_formats(void *,
    struct zwp_linux_dmabuf_feedback_v1 *, struct wl_array *)
{
}

static void dmabuf_feedback_tranche_flags(void *,
    struct zwp_linux_dmabuf_feedback_v1 *, uint32_t)
{
}

static const struct zwp_linux_dmabuf_feedback_v1_listener dmabuf_feedback_listener = {
    .done = dmabuf_feedback_done,
    .format_table = dmabuf_feedback_format_table,
    .main_device = dmabuf_feedback_main_device,
    .tranche_done = dmabuf_feedback_tranche_done,
    .tranche_target_device = dmabuf_feedback_tranche_target_device,
    .tranche_formats = dmabuf_feedback_tranche_formats,
    .tranche_flags = dmabuf_feedback_tranche_flags,
};

static void handle_global(void*, struct wl_registry *registry,
    uint32_t name, const char *interface, uint32_t)
{
    if (strcmp(interface, wl_output_interface.name) == 0)
    {
        auto output = (wl_output*)wl_registry_bind(registry, name,
            &wl_output_interface, 1);
        wf_recorder_output wro;
        wro.output = output;
        available_outputs.push_back(wro);
    } else if (strcmp(interface, wl_shm_interface.name) == 0)
    {
        shm = (wl_shm*) wl_registry_bind(registry, name, &wl_shm_interface, 1);
    } else if (strcmp(interface, zwlr_screencopy_manager_v1_interface.name) == 0)
    {
        screencopy_manager = (zwlr_screencopy_manager_v1*) wl_registry_bind(registry,
            name, &zwlr_screencopy_manager_v1_interface, 3);
    } else if (strcmp(interface, zxdg_output_manager_v1_interface.name) == 0)
    {
        xdg_output_manager = (zxdg_output_manager_v1*) wl_registry_bind(registry,
            name, &zxdg_output_manager_v1_interface, 2); // version 2 for name & description, if available
    } else if (strcmp(interface, zwp_linux_dmabuf_v1_interface.name) == 0)
    {
        dmabuf = (zwp_linux_dmabuf_v1*) wl_registry_bind(registry,
            name, &zwp_linux_dmabuf_v1_interface, 4);
        if (dmabuf)
        {
            struct zwp_linux_dmabuf_feedback_v1 *feedback =
                zwp_linux_dmabuf_v1_get_default_feedback(dmabuf);
            zwp_linux_dmabuf_feedback_v1_add_listener(feedback,
                &dmabuf_feedback_listener, NULL);
        }
    }
}

static void handle_global_remove(void*, struct wl_registry *, uint32_t)
{
    // Who cares?
}

static const struct wl_registry_listener registry_listener = {
    .global = handle_global,
    .global_remove = handle_global_remove,
};

static uint64_t timespec_to_usec(const timespec& ts)
{
    return ts.tv_sec * 1000000ll + 1ll * ts.tv_nsec / 1000ll;
}

static InputFormat get_input_format(wf_buffer& buffer)
{
    if (use_dmabuf && !use_hwupload)
    {
        return INPUT_FORMAT_DMABUF;
    }

    switch (buffer.format)
    {
        case WL_SHM_FORMAT_ARGB8888:
        case WL_SHM_FORMAT_XRGB8888:
            return INPUT_FORMAT_BGR0;
        case WL_SHM_FORMAT_XBGR8888:
        case WL_SHM_FORMAT_ABGR8888:
            return INPUT_FORMAT_RGB0;
        case WL_SHM_FORMAT_BGR888:
            return INPUT_FORMAT_BGR8;
        case WL_SHM_FORMAT_RGB565:
            return INPUT_FORMAT_RGB565;
        case WL_SHM_FORMAT_BGR565:
            return INPUT_FORMAT_BGR565;
        case WL_SHM_FORMAT_ARGB2101010:
        case WL_SHM_FORMAT_XRGB2101010:
            return INPUT_FORMAT_X2RGB10;
        case WL_SHM_FORMAT_ABGR2101010:
        case WL_SHM_FORMAT_XBGR2101010:
            return INPUT_FORMAT_X2BGR10;
        case WL_SHM_FORMAT_ABGR16161616:
        case WL_SHM_FORMAT_XBGR16161616:
            return INPUT_FORMAT_RGBX64;
        case WL_SHM_FORMAT_ARGB16161616:
        case WL_SHM_FORMAT_XRGB16161616:
            return INPUT_FORMAT_BGRX64;
        case WL_SHM_FORMAT_ABGR16161616F:
        case WL_SHM_FORMAT_XBGR16161616F:
            return INPUT_FORMAT_RGBX64F;
        default:
            fprintf(stderr, "Unsupported buffer format %d, exiting.", buffer.format);
            std::exit(0);
    }
}

static void write_loop(FrameWriterParams params)
{
    /* Ignore SIGTERM/SIGINT/SIGHUP, main loop is responsible for the
     * exit_main_loop signal */
    sigset_t sigset;
    sigemptyset(&sigset);
    for (auto signo : GRACEFUL_TERMINATION_SIGNALS)
    {
        sigaddset(&sigset, signo);
    }
    pthread_sigmask(SIG_BLOCK, &sigset, NULL);

#ifdef HAVE_AUDIO
    std::unique_ptr<AudioReader> pr;
#endif

    std::optional<uint64_t> first_frame_ts;
    while (!exit_main_loop)
    {
        // wait for frame to become available
        while (buffers.encode().ready_encode() != true && !exit_main_loop)
        {
            std::this_thread::sleep_for(std::chrono::microseconds(1000));
        }
        if (exit_main_loop)
        {
            break;
        }

        auto& buffer = buffers.encode();

        frame_writer_pending_mutex.lock();
        frame_writer_mutex.lock();
        frame_writer_pending_mutex.unlock();

        if (!frame_writer)
        {
            /* This is the first time buffer attributes are available */
            params.format = get_input_format(buffer);
            params.drm_format = buffer.drm_format;
            params.width = buffer.width;
            params.height = buffer.height;
            params.stride = buffer.stride;
            frame_writer = std::unique_ptr<FrameWriter>(new FrameWriter(params));

#ifdef HAVE_AUDIO
            if (params.enable_audio)
            {
                audioParams.audio_frame_size = frame_writer->get_audio_buffer_size();
                audioParams.sample_rate = params.sample_rate;
                pr = std::unique_ptr<AudioReader>(AudioReader::create(audioParams));
                if (pr)
                {
                    pr->start();
                }
            }
#endif
        }

        bool drop = false;
        uint64_t sync_timestamp = 0;
        if (first_frame_ts.has_value())
        {
            sync_timestamp = buffer.base_usec - first_frame_ts.value();
        } else if (pr)
        {
            if (!pr->get_time_base() || pr->get_time_base() > buffer.base_usec)
            {
                drop = true;
            } else
            {
                first_frame_ts = pr->get_time_base();
                sync_timestamp = buffer.base_usec - first_frame_ts.value();
            }
        } else
        {
            sync_timestamp = 0;
            first_frame_ts = buffer.base_usec;
        }

        bool do_cont = false;
        if (!drop)
        {
            if (use_dmabuf)
            {
                if (use_hwupload)
                {
                    uint32_t stride = 0;
                    void *map_data = NULL;
                    void *data = gbm_bo_map(buffer.bo, 0, 0, buffer.width, buffer.height,
                        GBM_BO_TRANSFER_READ, &stride, &map_data);
                    if (!data)
                    {
                        std::cerr << "Failed to map bo" << std::endl;
                        break;
                    }
                    do_cont = frame_writer->add_frame((unsigned char*)data,
                        sync_timestamp, buffer.y_invert);
                    gbm_bo_unmap(buffer.bo, map_data);
                } else
                {
                    do_cont = frame_writer->add_frame(buffer.bo,
                        sync_timestamp, buffer.y_invert);
                }
            } else
            {
                do_cont = frame_writer->add_frame((unsigned char*)buffer.data,
                    sync_timestamp, buffer.y_invert);
            }
        } else
        {
            do_cont = true;
        }
        frame_writer_mutex.unlock();

        if (!do_cont)
        {
            break;
        }
        buffers.next_encode();
    }

    std::lock_guard<std::mutex> lock(frame_writer_mutex);
    /* Free the AudioReader connection first. This way it'd flush any remaining
     * frames to the FrameWriter */
#ifdef HAVE_AUDIO
    pr = nullptr;
#endif
    frame_writer = nullptr;
}

void handle_graceful_termination(int)
{
    exit_main_loop = true;
}

static bool user_specified_overwrite(std::string filename)
{
    struct stat buffer;
    if (stat(filename.c_str(), &buffer) == 0 && !S_ISCHR(buffer.st_mode))
    {
        std::string input;
        std::cerr << "Output file \"" << filename << "\" exists. Overwrite? Y/n: ";
        std::getline(std::cin, input);
        if (input.size() && input[0] != 'Y' && input[0] != 'y')
        {
            std::cerr << "Use -f to specify the file name." << std::endl;
            return false;
        }
    }
    return true;
}

static void check_has_protos()
{
    if (shm == NULL)
    {
        fprintf(stderr, "compositor is missing wl_shm\n");
        exit(EXIT_FAILURE);
    }

    if (screencopy_manager == NULL)
    {
        fprintf(stderr, "compositor doesn't support wlr-screencopy-unstable-v1\n");
        exit(EXIT_FAILURE);
    }

    if (xdg_output_manager == NULL)
    {
        fprintf(stderr, "compositor doesn't support xdg-output-unstable-v1\n");
        exit(EXIT_FAILURE);
    }

    if (use_dmabuf && dmabuf == NULL)
    {
        fprintf(stderr, "compositor doesn't support linux-dmabuf-unstable-v1\n");
        exit(EXIT_FAILURE);
    }

    if (available_outputs.empty())
    {
        fprintf(stderr, "no outputs available\n");
        exit(EXIT_FAILURE);
    }
}

wl_display *display = NULL;

static void sync_wayland()
{
    wl_display_dispatch(display);
    wl_display_roundtrip(display);
}

static void load_output_info()
{
    for (auto& wo : available_outputs)
    {
        wo.zxdg_output = zxdg_output_manager_v1_get_xdg_output(
            xdg_output_manager, wo.output);
        zxdg_output_v1_add_listener(wo.zxdg_output,
            &xdg_output_implementation, NULL);
    }

    sync_wayland();
}

static wf_recorder_output* choose_interactive()
{
    fprintf(stdout, "Please select an output from the list to capture (enter output no.):\n");

    int i = 1;
    for (auto& wo : available_outputs)
    {
        printf("%d. Name: %s Description: %s\n", i++,
            wo.name.c_str(), wo.description.c_str());
    }

    printf("Enter output no.:");
    fflush(stdout);

    int choice;
    if (scanf("%d", &choice) != 1 || choice > (int)available_outputs.size() || choice <= 0)
        return nullptr;

    auto it = available_outputs.begin();
    std::advance(it, choice - 1);
    return &*it;
}

struct capture_region
{
    int32_t x, y;
    int32_t width, height;

    capture_region() : capture_region(0, 0, 0, 0) {}

    capture_region(int32_t _x, int32_t _y, int32_t _width, int32_t _height)
        : x(_x), y(_y), width(_width), height(_height) {}

    void set_from_string(std::string geometry_string)
    {
        if (sscanf(geometry_string.c_str(), "%d,%d %dx%d", &x, &y, &width, &height) != 4)
        {
            fprintf(stderr, "Bad geometry: %s, capturing whole output instead.\n",
                geometry_string.c_str());
            x = y = width = height = 0;
            return;
        }
    }

    bool is_selected()
    {
        return width > 0 && height > 0;
    }

    bool contained_in(const capture_region& output) const
    {
        return output.x <= x && output.x + output.width >= x + width &&
            output.y <= y && output.y + output.height >= y + height;
    }
};

static wf_recorder_output* detect_output_from_region(const capture_region& region)
{
    for (auto& wo : available_outputs)
    {
        const capture_region output_region{wo.x, wo.y, wo.width, wo.height};
        if (region.contained_in(output_region))
        {
            std::cerr << "Detected output based on geometry: " << wo.name << std::endl;
            return &wo;
        }
    }

    std::cerr << "Failed to detect output based on geometry (is your geometry overlapping outputs?)" << std::endl;
    return nullptr;
}

static void help()
{
    printf(R"(Usage: wf-recorder [OPTION]... -f [FILE]...
Screen recording of wlroots-based compositors

With no FILE, start recording the current screen.

Use Ctrl+C to stop.)");
#ifdef HAVE_AUDIO
    printf(R"(

  -a, --audio[=DEVICE]      Starts recording the screen with audio.
                            [=DEVICE] argument is optional.
                            In case you want to specify the audio device which
                            will capture the audio, you can run this command
                            with the name of that device.
                            You can find your device by running:
                            pactl list sources | grep Name
                            Specify device like this: -a<device> or --audio=<device>)");
#endif
    printf(R"(

  -c, --codec               Specifies the codec of the video. These can be found by using:
                            ffmpeg -encoders
                            To modify codec parameters, use -p <option_name>=<option_value>

  -r, --framerate           Changes framerate to constant framerate with a given value.

  -d, --device              Selects the device to use when encoding the video.
                            Some drivers report support for rgb0 data for vaapi input
                            but really only support yuv.

  --no-dmabuf               By default, wf-recorder will try to use only GPU buffers
                            and copies if using a GPU encoder. However, this can cause
                            issues on some systems. In such cases, this option will
                            disable the GPU copy and force a CPU one.

  -D, --no-damage           By default, wf-recorder will request a new frame from the
                            compositor only when the screen updates. This results in a
                            much smaller output file, which however has a variable
                            refresh rate. When this option is on, wf-recorder does not
                            use this optimization and continuously records new frames,
                            even if there are no updates on the screen.

  -f <filename>.ext         By using the -f option the output file will have the name
                            filename.ext, and the file format will be determined by the
                            provided extension .ext. If the extension .ext is not
                            recognized by your FFmpeg muxers, the command will fail.
                            You can check the muxers that your FFmpeg installation
                            supports by running: ffmpeg -muxers

  -m, --muxer               Set the output format to a specific muxer instead of
                            detecting it from the filename.

  -x, --pixel-format        Set the output pixel format. These can be found by running:
                            ffmpeg -pix_fmts

  -g, --geometry            Selects a specific part of the screen. The format is
                            "x,y WxH".

  -h, --help                Prints this help screen.

  -v, --version             Prints the version of wf-recorder.

  -l, --log                 Generates a log on the current terminal. Debug purposes.

  -o, --output              Specify the output where the video is to be recorded.

  -p, --codec-param         Change the codec parameters.
                            -p <option_name>=<option_value>

  -F, --filter              Specify the ffmpeg filter string to use. For example,
                            -F scale_vaapi=format=nv12 is used for VAAPI.

  -b, --bframes             This option is used to set the maximum number of b-frames
                            to be used. If b-frames are not supported by your hardware,
                            set this to 0.

  -B, --buffrate            This option is used to specify the buffer's expected
                            framerate. This may help when encoders are expecting a
                            specific or limited framerate.

  --audio-backend           Specifies the audio backend among the available backends,
                            for ex. --audio-backend=pipewire

  -C, --audio-codec         Specifies the codec of the audio. These can be found by
                            running: ffmpeg -encoders
                            To modify codec parameters, use -P <option_name>=<option_value>

  -X, --sample-format       Set the output audio sample format. These can be found by
                            running: ffmpeg -sample_fmts

  -R, --sample-rate         Changes the audio sample rate, in Hz. The default value
                            is 48000.

  -P, --audio-codec-param   Change the audio codec parameters.
                            -P <option_name>=<option_value>

  -y, --overwrite           Force overwriting the output file without prompting.

Examples:)");
#ifdef HAVE_AUDIO
    printf(R"(

  Video Only:)");
#endif
    printf(R"(

  - wf-recorder                     Records the video. Use Ctrl+C to stop recording.
                                    The video file will be stored as recording.mp4 in the
                                    current working directory.

  - wf-recorder -f <filename>.ext   Records the video. Use Ctrl+C to stop recording.
                                    The video file will be stored as <filename>.ext in the
                                    current working directory.)");
#ifdef HAVE_AUDIO
    printf(R"(

  Video and Audio:

  - wf-recorder -a                  Records the video and audio. Use Ctrl+C to stop recording.
                                    The video file will be stored as recording.mp4 in the
                                    current working directory.

  - wf-recorder -a -f <filename>.ext
                                    Records the video and audio. Use Ctrl+C to stop recording.
                                    The video file will be stored as <filename>.ext in the
                                    current working directory.)");
#endif
    printf(R"(
)" "\n");
    exit(EXIT_SUCCESS);
}

capture_region selected_region{};
wf_recorder_output *chosen_output = nullptr;
zwlr_screencopy_frame_v1 *frame = NULL;

void request_next_frame()
{
    if (frame != NULL)
    {
        zwlr_screencopy_frame_v1_destroy(frame);
    }

    /* Capture the whole output if the user hasn't provided a good geometry */
    if (!selected_region.is_selected())
    {
        frame = zwlr_screencopy_manager_v1_capture_output(
            screencopy_manager, 1, chosen_output->output);
    } else
    {
        frame = zwlr_screencopy_manager_v1_capture_output_region(
            screencopy_manager, 1, chosen_output->output,
            selected_region.x - chosen_output->x,
            selected_region.y - chosen_output->y,
            selected_region.width, selected_region.height);
    }

    zwlr_screencopy_frame_v1_add_listener(frame, &frame_listener, NULL);
}

static void parse_codec_opts(std::map<std::string, std::string>& options,
    const std::string param)
{
    size_t pos;
    pos = param.find("=");
    if (pos != std::string::npos && pos != param.length() - 1)
    {
        auto optname = param.substr(0, pos);
        auto optvalue = param.substr(pos + 1, param.length() - pos - 1);
        options.insert(std::pair<std::string, std::string>(optname, optvalue));
    } else
    {
        std::cerr << "Invalid codec option " + param << std::endl;
    }
}

int main(int argc, char *argv[])
{
    FrameWriterParams params = FrameWriterParams(exit_main_loop);
    params.file = "recording."
        + std::string(DEFAULT_CONTAINER_FORMAT);
    params.codec = DEFAULT_CODEC;
    params.pix_fmt = DEFAULT_PIX_FMT;
    params.audio_codec = DEFAULT_AUDIO_CODEC;
    params.sample_rate = DEFAULT_AUDIO_SAMPLE_RATE;
    params.enable_ffmpeg_debug_output = false;
    params.enable_audio = false;
    params.bframes = -1;

    constexpr const char* default_cmdline_output = "interactive";
    std::string cmdline_output = default_cmdline_output;
    bool force_no_dmabuf = false;
    bool force_overwrite = false;

    struct option opts[] = {
        { "output",            required_argument, NULL, 'o' },
        { "file",              required_argument, NULL, 'f' },
        { "muxer",             required_argument, NULL, 'm' },
        { "geometry",          required_argument, NULL, 'g' },
        { "codec",             required_argument, NULL, 'c' },
        { "codec-param",       required_argument, NULL, 'p' },
        { "framerate",         required_argument, NULL, 'r' },
        { "pixel-format",      required_argument, NULL, 'x' },
        { "audio-backend",     required_argument, NULL, '*' },
        { "audio-codec",       required_argument, NULL, 'C' },
        { "audio-codec-param", required_argument, NULL, 'P' },
        { "sample-rate",       required_argument, NULL, 'R' },
        { "sample-format",     required_argument, NULL, 'X' },
        { "device",            required_argument, NULL, 'd' },
        { "no-dmabuf",         no_argument,       NULL, '&' },
        { "filter",            required_argument, NULL, 'F' },
        { "log",               no_argument,       NULL, 'l' },
        { "audio",             optional_argument, NULL, 'a' },
        { "help",              no_argument,       NULL, 'h' },
        { "bframes",           required_argument, NULL, 'b' },
        { "buffrate",          required_argument, NULL, 'B' },
        { "version",           no_argument,       NULL, 'v' },
        { "no-damage",         no_argument,       NULL, 'D' },
        { "overwrite",         no_argument,       NULL, 'y' },
        { 0,                   0,                 NULL,  0  }
    };

    int c, i;
    while ((c = getopt_long(argc, argv, "o:f:m:g:c:p:r:x:C:P:R:X:d:b:B:la::hvDF:y", opts, &i)) != -1)
    {
        switch (c)
        {
            case 'f':
                params.file = optarg;
                break;

            case 'F':
                params.video_filter = optarg;
                break;

            case 'o':
                cmdline_output = optarg;
                break;

            case 'm':
                params.muxer = optarg;
                break;

            case 'g':
                selected_region.set_from_string(optarg);
                break;

            case 'c':
                params.codec = optarg;
                break;

            case 'r':
                params.framerate =
                    atoi(optarg);
                break;

            case 'x':
                params.pix_fmt = optarg;
                break;

            case 'C':
                params.audio_codec = optarg;
                break;

            case 'R':
                params.sample_rate = atoi(optarg);
                break;

            case 'X':
                params.sample_fmt = optarg;
                break;

            case 'd':
                params.hw_device = optarg;
                break;

            case 'b':
                params.bframes = atoi(optarg);
                break;

            case 'B':
                params.buffrate = atoi(optarg);
                break;

            case 'l':
                params.enable_ffmpeg_debug_output = true;
                break;

            case 'a':
#ifdef HAVE_AUDIO
                params.enable_audio = true;
                audioParams.audio_source = optarg ? strdup(optarg) : NULL;
#else
                std::cerr << "Cannot record audio. Built without audio support." << std::endl;
                return EXIT_FAILURE;
#endif
                break;

            case 'h':
                help();
                break;

            case 'p':
                parse_codec_opts(params.codec_options, optarg);
                break;

            case 'v':
                printf("wf-recorder %s\n", WFRECORDER_VERSION);
                return 0;

            case 'D':
                use_damage = false;
                break;

            case 'P':
                parse_codec_opts(params.audio_codec_options, optarg);
                break;

            case '&':
                force_no_dmabuf = true;
                break;

            case 'y':
                force_overwrite = true;
                break;

            case '*':
                audioParams.audio_backend = optarg;
                break;

            default:
                printf("Unsupported command line argument %s\n", optarg);
        }
    }

    if (!force_overwrite && !user_specified_overwrite(params.file))
    {
        return EXIT_FAILURE;
    }

    display = wl_display_connect(NULL);
    if (display == NULL)
    {
        fprintf(stderr, "failed to create display: %m\n");
        return EXIT_FAILURE;
    }

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);
    sync_wayland();

    if (params.codec.find("vaapi") != std::string::npos)
    {
        std::cerr << "using VA-API, trying to enable DMA-BUF capture..."
            << std::endl;

        // try compositor device if not explicitly set
        if (params.hw_device.empty())
        {
            params.hw_device = drm_device_name;
        }

        // check we use same device as compositor
        if (!params.hw_device.empty() && params.hw_device == drm_device_name &&
            !force_no_dmabuf)
        {
            use_dmabuf = true;
        } else if (force_no_dmabuf)
        {
            std::cerr << "Disabling DMA-BUF as requested on command line" << std::endl;
        } else
        {
            std::cerr << "compositor running on different device, disabling DMA-BUF" << std::endl;
        }

        // region with dmabuf needs wlroots >= 0.17
        if (use_dmabuf && selected_region.is_selected())
        {
            std::cerr << "region capture may not work with older wlroots, try --no-dmabuf if it fails" << std::endl;
        }

        if (params.video_filter == "null")
        {
            params.video_filter = "scale_vaapi=format=nv12:out_range=full";
            if (!use_dmabuf)
            {
                params.video_filter.insert(0, "hwupload,");
            }
        }

        if (use_dmabuf)
        {
            std::cerr << "enabled DMA-BUF capture, device " << params.hw_device.c_str() << std::endl;

            drm_fd = open(params.hw_device.c_str(), O_RDWR);
            if (drm_fd < 0)
            {
                fprintf(stderr, "failed to open drm device: %m\n");
                return EXIT_FAILURE;
            }

            gbm_device = gbm_create_device(drm_fd);
            if (gbm_device == NULL)
            {
                fprintf(stderr, "failed to create gbm device: %m\n");
                return EXIT_FAILURE;
            }

            use_hwupload = params.video_filter.find("hwupload") != std::string::npos;
        }
    }

    check_has_protos();
    load_output_info();

    if (available_outputs.size() == 1)
    {
        chosen_output = &available_outputs.front();
        if (chosen_output->name != cmdline_output &&
            cmdline_output != default_cmdline_output)
        {
            std::cerr << "Couldn't find requested output "
                << cmdline_output << std::endl;
            return EXIT_FAILURE;
        }
    } else
    {
        for (auto& wo : available_outputs)
        {
            if (wo.name == cmdline_output)
                chosen_output = &wo;
        }

        if (chosen_output == NULL)
        {
            if (cmdline_output != default_cmdline_output)
            {
                std::cerr << "Couldn't find requested output "
                    << cmdline_output.c_str() << std::endl;
                return EXIT_FAILURE;
            }

            if (selected_region.is_selected())
            {
                chosen_output =
                    detect_output_from_region(selected_region);
            } else
            {
                chosen_output = choose_interactive();
            }
        }
    }

    if (chosen_output == nullptr)
    {
        fprintf(stderr, "Failed to select output, exiting\n");
        return EXIT_FAILURE;
    }

    if (selected_region.is_selected())
    {
        if (!selected_region.contained_in({chosen_output->x, chosen_output->y,
            chosen_output->width, chosen_output->height}))
        {
            fprintf(stderr, "Invalid region to capture: must be completely "
                "inside the output\n");
            selected_region = capture_region{};
        }
    }

    printf("selected region %d,%d %dx%d\n",
        selected_region.x, selected_region.y,
        selected_region.width, selected_region.height);

    bool spawned_thread = false;
    std::thread writer_thread;

    for (auto signo : GRACEFUL_TERMINATION_SIGNALS)
    {
        signal(signo, handle_graceful_termination);
    }

    while (!exit_main_loop)
    {
        // wait for a free buffer
        while (buffers.capture().ready_capture() != true)
        {
            std::this_thread::sleep_for(std::chrono::microseconds(500));
        }

        buffer_copy_done = false;
        request_next_frame();
        while (!buffer_copy_done && !exit_main_loop && wl_display_dispatch(display) != -1)
        {
            // This space is intentionally left blank
        }

        if (exit_main_loop)
        {
            break;
        }

        auto& buffer = buffers.capture();
        //std::cout << "first buffer at " << timespec_to_usec(get_ct()) / 1.0e6 << std::endl;

        if (!spawned_thread)
        {
            writer_thread = std::thread([=] () { write_loop(params); });
            spawned_thread = true;
        }

        buffer.base_usec = timespec_to_usec(buffer.presented);
        buffers.next_capture();
    }

    if (writer_thread.joinable())
    {
        writer_thread.join();
    }

    for (size_t i = 0; i < buffers.size(); ++i)
    {
        auto buffer = buffers.at(i);
        if (buffer && buffer->wl_buffer)
            wl_buffer_destroy(buffer->wl_buffer);
    }

    if (gbm_device)
    {
        gbm_device_destroy(gbm_device);
        close(drm_fd);
    }

    return EXIT_SUCCESS;
}

==== wf-recorder-0.5.0+git1/src/pipewire.cpp ====

#include "pipewire.hpp"
#include "frame-writer.hpp"

#include <iostream>
#include <spa/param/audio/format-utils.h>

PipeWireReader::~PipeWireReader()
{
    pw_thread_loop_lock(thread_loop);
    if (stream)
    {
        spa_hook_remove(&stream_listener);
        if (pw_stream_get_state(stream, nullptr) != PW_STREAM_STATE_UNCONNECTED)
            pw_stream_disconnect(stream);
        pw_stream_destroy(stream);
    }
    pw_thread_loop_unlock(thread_loop);
    pw_thread_loop_stop(thread_loop);
    if (core)
    {
        spa_hook_remove(&core_listener);
        pw_core_disconnect(core);
    }
    if (context)
        pw_context_destroy(context);
    pw_thread_loop_destroy(thread_loop);
    pw_deinit();
    delete [] buf;
}

static void on_core_done(void *data, uint32_t id, int seq)
{
    PipeWireReader *pr = static_cast<PipeWireReader*>(data);
    if (id == PW_ID_CORE && pr->seq == seq)
        pw_thread_loop_signal(pr->thread_loop, false);
}

static void on_core_error(void *data, uint32_t, int, int res, const char *message)
{
    PipeWireReader *pr = static_cast<PipeWireReader*>(data);
    std::cerr << "pipewire: core error " << res << " " << message << std::endl;
    pw_thread_loop_signal(pr->thread_loop, false);
}

static const struct pw_core_events core_events = {
    .version = PW_VERSION_CORE_EVENTS,
    .info = nullptr,
    .done = on_core_done,
    .ping = nullptr,
    .error = on_core_error,
    .remove_id = nullptr,
    .bound_id = nullptr,
    .add_mem = nullptr,
    .remove_mem = nullptr,
    .bound_props = nullptr,
};

bool PipeWireReader::init()
{
    buf = new uint8_t[params.audio_frame_size * 4];

    int argc = 0;
    pw_init(&argc, nullptr);

    thread_loop = pw_thread_loop_new("PipeWire", nullptr);
    context = pw_context_new(pw_thread_loop_get_loop(thread_loop), nullptr, 0);
    if (!context)
    {
        std::cerr << "pipewire: context_new error" << std::endl;
        return false;
    }

    pw_thread_loop_lock(thread_loop);
    if (pw_thread_loop_start(thread_loop) < 0)
    {
        std::cerr << "pipewire: thread_loop_start error" << std::endl;
        pw_thread_loop_unlock(thread_loop);
        return false;
    }

    core = pw_context_connect(context, nullptr, 0);
    if (!core)
    {
        std::cerr << "pipewire: context_connect error" << std::endl;
        pw_thread_loop_unlock(thread_loop);
        return false;
    }
pw_core_add_listener(core, &core_listener, &core_events, this); return true; } static void on_stream_process(void *data) { PipeWireReader *pr = static_cast<PipeWireReader*>(data); struct pw_buffer *b = pw_stream_dequeue_buffer(pr->stream); if (!b) { std::cerr << "pipewire: out of buffers: " << strerror(errno) << std::endl; return; } for (uint32_t i = 0; i < b->buffer->n_datas; ++i) { struct spa_data *d = &b->buffer->datas[i]; memcpy(pr->buf + pr->buf_size, d->data, d->chunk->size); pr->buf_size += d->chunk->size; while (pr->buf_size >= pr->params.audio_frame_size) { frame_writer->add_audio(pr->buf); pr->buf_size -= pr->params.audio_frame_size; if (pr->buf_size) memmove(pr->buf, pr->buf + pr->params.audio_frame_size, pr->buf_size); } } if (!pr->time_base) pr->time_base = b->time; pw_stream_queue_buffer(pr->stream, b); } static const struct pw_stream_events stream_events = { .version = PW_VERSION_STREAM_EVENTS, .destroy = nullptr, .state_changed = nullptr, .control_info = nullptr, .io_changed = nullptr, .param_changed = nullptr, .add_buffer = nullptr, .remove_buffer = nullptr, .process = on_stream_process, .drained = nullptr, .command = nullptr, .trigger_done = nullptr, }; static void on_registry_global(void *data, uint32_t, uint32_t, const char *type, uint32_t, const struct spa_dict *props) { PipeWireReader *pr = static_cast<PipeWireReader*>(data); if (strcmp(type, PW_TYPE_INTERFACE_Node) != 0) return; const char *name = spa_dict_lookup(props, PW_KEY_NODE_NAME); if (!name || strcmp(pr->params.audio_source, name) != 0) return; const char *media_class = spa_dict_lookup(props, PW_KEY_MEDIA_CLASS); if (!media_class) return; pr->source_found = true; pr->source_is_sink = strcmp(media_class, "Audio/Sink") == 0; } static const struct pw_registry_events registry_events = { .version = PW_VERSION_REGISTRY_EVENTS, .global = on_registry_global, .global_remove = nullptr, }; void PipeWireReader::start() { struct pw_properties *props = pw_properties_new(PW_KEY_MEDIA_TYPE, "Audio", 
PW_KEY_MEDIA_CATEGORY, "Capture", PW_KEY_MEDIA_ROLE, "Screen", PW_KEY_STREAM_CAPTURE_SINK, "true", PW_KEY_NODE_NAME, "wf-recorder", NULL); if (params.audio_source) { struct pw_registry *registry = pw_core_get_registry(core, PW_VERSION_REGISTRY, 0); if (registry) { struct spa_hook registry_listener; pw_registry_add_listener(registry, ®istry_listener, ®istry_events, this); seq = pw_core_sync(core, PW_ID_CORE, seq); pw_thread_loop_wait(thread_loop); if (!source_found) { std::cerr << "pipewire: source " << params.audio_source << " not found, using default" << std::endl; } else { pw_properties_set(props, PW_KEY_STREAM_CAPTURE_SINK, source_is_sink ? "true" : "false"); pw_properties_set(props, PW_KEY_TARGET_OBJECT, params.audio_source); } spa_hook_remove(®istry_listener); pw_proxy_destroy(reinterpret_cast<struct pw_proxy*>(registry)); } } stream = pw_stream_new(core, "wf-recorder", props); pw_stream_add_listener(stream, &stream_listener, &stream_events, this); uint8_t buffer[1024]; struct spa_pod_builder b = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer)); struct spa_audio_info_raw info = {}; info.format = SPA_AUDIO_FORMAT_F32_LE; info.rate = params.sample_rate; info.channels = 2; const struct spa_pod *audio_param = spa_format_audio_raw_build(&b, SPA_PARAM_EnumFormat, &info); pw_stream_connect(stream, PW_DIRECTION_INPUT, PW_ID_ANY, static_cast<enum pw_stream_flags>(PW_STREAM_FLAG_AUTOCONNECT | PW_STREAM_FLAG_DONT_RECONNECT | PW_STREAM_FLAG_MAP_BUFFERS), &audio_param, 1); pw_thread_loop_unlock(thread_loop); } uint64_t PipeWireReader::get_time_base() const { return time_base / 1000; } 07070103197DAC000081A400000000000000000000000167058077000002D8000000000000003400000000000000000000002800000000wf-recorder-0.5.0+git1/src/pipewire.hpp#ifndef PIPEWIRE_HPP #define PIPEWIRE_HPP #include "audio.hpp" #include <pipewire/pipewire.h> class PipeWireReader : public AudioReader { public: ~PipeWireReader(); bool init() override; void start() override; uint64_t get_time_base() const override; 
struct pw_thread_loop *thread_loop = nullptr; struct pw_context *context = nullptr; struct pw_core *core = nullptr; struct spa_hook core_listener; struct pw_stream *stream = nullptr; struct spa_hook stream_listener; int seq = 0; bool source_found = false; bool source_is_sink = false; uint8_t *buf = nullptr; size_t buf_size = 0; uint64_t time_base = 0; }; #endif /* end of include guard: PIPEWIRE_HPP */ 07070103197DAD000081A400000000000000000000000167058077000007DE000000000000003400000000000000000000002500000000wf-recorder-0.5.0+git1/src/pulse.cpp#include "pulse.hpp" #include "frame-writer.hpp" #include <iostream> #include <vector> #include <cstring> #include <thread> bool PulseReader::init() { pa_channel_map map; std::memset(&map, 0, sizeof(map)); pa_channel_map_init_stereo(&map); pa_buffer_attr attr; attr.maxlength = params.audio_frame_size * 4; attr.fragsize = params.audio_frame_size * 4; pa_sample_spec sample_spec = { .format = PA_SAMPLE_FLOAT32LE, .rate = params.sample_rate, .channels = 2, }; int perr; std::cerr << "Using PulseAudio device: " << (params.audio_source ?: "default") << std::endl; pa = pa_simple_new(NULL, "wf-recorder3", PA_STREAM_RECORD, params.audio_source, "wf-recorder3", &sample_spec, &map, &attr, &perr); struct timespec ts; clock_gettime(CLOCK_MONOTONIC, &ts); this->monotonic_clock_start = ts.tv_sec * 1000000ll + ts.tv_nsec / 1000ll; int error = 0; uint64_t latency_audio = pa_simple_get_latency(pa, &error); if (latency_audio != (pa_usec_t)-1) { monotonic_clock_start -= latency_audio; } if (!pa) { std::cerr << "Failed to connect to PulseAudio: " << pa_strerror(perr) << "\nRecording won't have audio" << std::endl; return false; } return true; } bool PulseReader::loop() { static std::vector<char> buffer; buffer.resize(params.audio_frame_size); int perr; if (pa_simple_read(pa, buffer.data(), buffer.size(), &perr) < 0) { std::cerr << "Failed to read from PulseAudio stream: " << pa_strerror(perr) << std::endl; return false; } 
frame_writer->add_audio(buffer.data()); return !exit_main_loop; } void PulseReader::start() { if (!pa) return; read_thread = std::thread([=] () { while (loop()); }); } PulseReader::~PulseReader() { if (pa) read_thread.join(); } uint64_t PulseReader::get_time_base() const { return monotonic_clock_start; } 07070103197DAE000081A400000000000000000000000167058077000001C9000000000000003400000000000000000000002500000000wf-recorder-0.5.0+git1/src/pulse.hpp#ifndef PULSE_HPP #define PULSE_HPP #include "audio.hpp" #include <pulse/simple.h> #include <pulse/error.h> #include <thread> class PulseReader : public AudioReader { pa_simple *pa; bool loop(); std::thread read_thread; uint64_t monotonic_clock_start = 0; public: ~PulseReader(); bool init() override; void start() override; uint64_t get_time_base() const override; }; #endif /* end of include guard: PULSE_HPP */ 07070100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!