- We like to keep disconnected from the internet most of the time. How long can a Psychtoolbox license work offline?
- We have a computer for which a network connection is not allowed. Is there a way to get a license that can be permanently used offline?
- How can I manage my subscriptions?
- How do I prevent software license subscriptions from automatically renewing and rebilling?
- How do I cancel a subscription after the end of its current subscription cycle?
- I changed my mind. Can I return my license key and get a refund?
- How can I get an invoice for my purchase?
- How can I get a formal quote for my purchase?
- Our institution is exempted from sales tax / VAT / GST. How can we avoid paying tax?
Q: Where can I find a discussion thread about the new software licensing model?
A: Follow this link to a discussion thread on our user forum.
Q: How can I manage my subscriptions?
A: Our reseller FastSpring provides a customer self-service subscription management portal. Once you have purchased a subscription, your email receipt will contain links to this portal: one directly above the license key and below the subscription terms, called "Full Terms and Subscription Management" (blue link in the image above), and another called "Manage Your Orders" at the beginning of the receipt. Both links lead to the same self-service portal, which allows you, among other things (see image above), to switch your subscription from automatic rebilling to manual payment, to cancel your subscription, and to update your payment methods and information for automatic rebilling of subscriptions. It also lets you download invoices.
Additionally, FastSpring customer support can assist you with various tasks under this link
Q: How do I prevent software license subscriptions from automatically renewing and rebilling?
A: By default, the FastSpring online shop's product checkout page will generate an auto-renewing, auto-rebilling software subscription. Your payment data, e.g., credit card or PayPal information, will be securely stored by FastSpring, and FastSpring will try to charge the stored payment method at the end of each subscription cycle. You will be informed about this via email. On a successful rebill, your subscription renews for another subscription cycle, e.g., one year. If you don't want your subscription to be automatically rebilled, and money withdrawn from your funds, you have two options:
- Before purchase: Uncheck the "Securely save payment details for automated subscription renewal" checkbox (orange checkbox in the image). This prevents storage of your payment info beyond your current session and prevents automatic rebilling. Instead, you will receive a payment reminder email at least one week before your subscription runs out and your Psychtoolbox and its associated license disable themselves. The email will contain instructions on how to manually pay for another subscription cycle if you want. Choosing manual payment this way also offers additional payment methods, e.g., pay by check, wire transfer, AliPay in China, Klarna in the Netherlands, i.e., methods which do not support automatic rebilling.
- After purchase: If you have already purchased a subscription, then the email receipt you have received will contain links to a customer self service portal, as explained in the answer to the previous question. The self service portal allows you to switch your subscription from automatic rebilling to manual payment, by clicking the blue "Disable" link next to the text "Automatically pay every cycle" (see image above).
Q: How do I cancel a subscription after the end of its current subscription cycle?
A: The easiest way to do this is to use the customer self-service portal. See the previous question for how to get there. Each of your subscriptions is listed there, and to the right of each subscription is a blue "Manage" drop-down menu button, with one of the options being "Cancel subscription". Selecting that option and confirming cancellation will cancel renewal of your subscription at the end of its current cycle. That means you can continue using Psychtoolbox until the end of the already paid for subscription period. Once that period runs out, the subscription will terminate, you won't get billed anymore, and the Psychtoolbox mex files will cease to work. Note that, should you change your mind, you can "Un-Cancel" a subscription that you have cancelled with an option in the same "Manage" drop-down menu, right until the end of the current subscription cycle.
Q: I changed my mind. Can I return my license key and get a refund?
A: In short, yes, up to 14 days after purchase of a subscription license, and if you haven't used the key yet, or have deactivated the license key on all your computers. See our refund policy and procedures under this link.
Q: How can I get an invoice for my purchase?
A: The order receipt email sent to you has a link at the bottom to a downloadable invoice. In the customer self-service portal (see questions and pictures above), as part of the description of your subscription, there is also a blue link "Billing history", which shows your subscription's billing history, with links to a downloadable invoice for each billing.
Q: How can I get a formal quote for my purchase?
A: When you select a subscription plan, then at the top of the product description there will be a blue link "Generate a Quote". Clicking it will ask for information like your name, the name and address of your institution, etc. You can also specify a purchase order number suitable for your administrative purposes. From there, a formal quote will be generated for download. The quote will also allow you to generate a corresponding invoice if you accept the quote right away, and the invoice leads to the payment interface for online payment, or for receiving bank transfer instructions.
Q: Our institution is exempted from sales tax / VAT / GST. How can we avoid paying tax?
A: If your institution is exempt from value added tax (VAT), general sales tax (GST), or similar taxes, and you know your institution's tax registration id number, e.g., the VAT Id for purchases from Europe, then you can enter this id number, and all formal quotes, invoices, and displayed prices will use the net price, so no tax will be charged. In the product checkout interface (see picture above, specifically the "Your Payment" box), right below the display of the total subscription price and the applicable tax, you will find a blue link named "Enter VAT Id", "Tax number", or a similar country specific name. This link allows you to enter your organization's registration number. Other parts of the payment process may also have a similar link close to the display of sales tax.
Q: We like to keep disconnected from the internet most of the time. How long can a Psychtoolbox license work offline?
A: Our current subscription plans state that you can use Psychtoolbox “up to 120 days offline by default”.
In reality, a freshly bought license allows 30 days of offline use. This is a hard limit imposed by a deficiency of Psychtoolbox 3.0.20: any larger offline grace period would make the mex files fail. Right now almost two thousand machines are still using Psychtoolbox 3.0.20, and it would be a bad user experience if Psychtoolbox 3.0.20 immediately shut down as a "reward" for buying a license and forced users to upgrade immediately. That's why a license starts with a "safe" 30 days offline.
If you install Psychtoolbox 3.0.21 or later, then we can set any offline limit we want on your license, which is why we strongly recommend everybody upgrade from 3.0.20 to 3.0.21 as soon as possible! We are considering pulling all 3.0.20 release zip files, to prevent new users from accidentally downloading these old 3.0.20 releases.
While there are still many old 3.0.20 versions out there, we will periodically run a script on our side that checks if a license is used with only Psychtoolbox 3.0.21 or later, and bump up the offline limit to 120 days if that is the case. But immediately after activating a license the limit will be 30 days.
The offline limit is a protection for us against fraudulent customers: the longer the offline limit, the less protection against abuse. So we will probably adjust it once we know how people generally behave.
However, the periodic online license server sync is also a way for us to offer some potentially interesting services to you in the long term. E.g., Psychtoolbox 3.0.21 allows us to convey simple one-line text messages to you, for important news or useful advice, maybe in the future even context dependent: e.g., if we knew that you are using the license with Apple Silicon Macs and there were a new Psychtoolbox release with significant improvements for Apple Silicon, we could send those users a short one-time announcement about the benefit of upgrading to the latest release. That is just one example. These info messages are only pulled during a server sync. The sync may also allow for some interesting new ways of user support.
Q: We have a computer for which a network connection is not allowed. Is there a way to get a license that can be permanently used offline?
A: Currently sold Psychtoolbox subscription licenses do not allow fully offline use by default. We may consider offering dedicated offline licenses in the future. That said, right now an already bought standard license can be manually reconfigured for full offline use on request, and only at our discretion. However, after reconfiguration, such licenses can no longer be managed by yourself, e.g., they can not be moved from one machine to another anymore. After a request has been made and granted, reconfiguration of a license for offline use can take multiple business days, or sometimes even multiple weeks, if the only person with the necessary knowledge is very busy, sick, or on vacation. So plan plenty of time ahead of your potential need, if you really think you need this.
- How do I get PTB-3?
- Is PTB-3 backwards compatible with PTB-2?
- Does Psychtoolbox work with 64-bit versions of Matlab or Octave?
- Does Psychtoolbox work with Matlab R2015b?
- Windows: Why does my virus checker complain about the PTB-3 distribution?
- What's the difference between a texture, a window, and a screen?
- What do the timestamps returned by Screen('Flip') mean?
- How can I skip the verbose checks that Screen performs?
- Psychtoolbox shows a blank screen and then nothing happens?!?
- How do I make the initial screen black instead of white?
- How to display images with transparent backgrounds?
- How can I update Matlab figures during the experiment?
- How do I duplicate an offscreen window?
- Can offscreen windows have multiple buffers?
- Can I set TextSize (and other parameters) for all windows/screens?
- How can I try to improve timing and performance of PTB-3 code?
- Is it possible to get 10-bit DAC resolution with PTB-3?
- What is the status of 10-bit framebuffers?
- When trying to play movies with GStreamer, Matlab crashes! What can I do?
- How to send TTL triggers?
- Are there known issues with Windows Vista and how to resolve them?
- Psychtoolbox tells me that the clock and timers on my computer are broken! What now?!?
- Mac 10.10 VBL Sync Issues?
- How do I add a FAQ & answer to this list?
Mathworks sponsored the following FAQ entries, as well as the FAQs about the new business model, in addition to sponsoring some refinements to the FAQs above.
- How can I take a screenshot of or record my stimuli?
- How do I close a screen window and return to the command line?
- What is the status of macOS support on Apple Silicon Macs?
- Are touchscreens supported? How do I use them?
- Can I somehow display stimuli with more than 10 bit precision per color channel?
- Is it possible to display High Dynamic Range stimuli?
- How can I play back high resolution movies, e.g., 4k HDR-10?
- How can I present multiple visual stimuli in parallel?
- Which function should I use for collecting keyboard (and mouse) input?
- How can I use Virtual Reality Head mounted displays (VR-HMDs)?
- Is it possible to present visual stimuli with very fine-grained timing?
Q: How do I get Psychtoolbox?
A: Follow the instructions provided on the website.
Q: Is PTB-3 backwards compatible with PTB-2 (Mac or Win)?
A: Not really. In developing PTB-3, it unfortunately turned out that the imaging model of PTB-2 was too tied to Apple's QuickDraw to make it work with OpenGL.
For example, a fundamental concept in OpenGL is buffer switching: the contents of a 'front' buffer are displayed while you issue commands that affect the contents of a 'back' buffer. A single command then flips the roles of the two, allowing quick updating of the entire display. This concept was not part of the PTB-2 imaging model. A consequence of this change is that if you don't insert the flip command Screen('Flip', ...) after your drawing operations, you won't see the result. This change can take a little while to get used to, if you've been programming in PTB-2.
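In practice, the draw-then-flip cycle looks like this (a minimal sketch; the screen index, color and rect values are illustrative assumptions):

```matlab
% Minimal sketch of the PTB-3 draw-then-flip cycle.
win = Screen('OpenWindow', 0, 128);                     % open a double-buffered window on screen 0, gray background
Screen('FillRect', win, [255 0 0], [100 100 300 300]);  % draw a red rect into the back buffer (not yet visible)
Screen('Flip', win);                                    % swap buffers: the stimulus becomes visible now
WaitSecs(1);                                            % show it for one second
sca;                                                    % close all windows, restore the desktop
```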
However, if you must use old code written for PTB-2, you can switch PTB-3 into a kind of compatibility mode by adding the following command at the very top of your script:
% Enable compatibility mode to old PTB-2:
Screen('Preference', 'EmulateOldPTB', 1);
This will emulate the drawing model of PTB-2:
- All drawing commands will be directed to the visible screen immediately, without need for the Screen('Flip', ...) command.
- The Screen('WaitBlanking', ...) command gets re-enabled to allow synchronization of Matlab to the vertical retrace.
This allows many old PTB-2 scripts to run without further modifications. However, there may be subtle differences between drawing commands on PTB-2 and PTB-3, so this approach does not guarantee backwards compatibility.
Although PTB-3 is not backwards compatible, there are many similarities between it and earlier versions. It is not too hard to convert old programs, and the ever-growing set of new demos provides examples.
Some of the comments and help text in PTB-3 explain differences between it and older versions, but at this point we are trying to streamline these in the interests of clarity.
Q: Does Psychtoolbox work with 64-Bit versions of Matlab or Octave?
A: Yes.
In fact, 32-bit support is no longer available in current Psychtoolbox releases, except for 32-Bit Octave for Linux on Debian/Ubuntu systems (but only in the releases provided by the NeuroDebian project and by upstream Linux distributions), and for 32-Bit Octave on the RaspberryPi microcomputer under Raspbian.
Q: Does Psychtoolbox work with Matlab R2015b?
A: Yes, for Psychtoolbox versions 3.0.13 and later.
On older versions of Psychtoolbox, if you needed advanced OpenGL functionality through the mogl wrapper (calling OpenGL glXXX() functions directly), code failed on R2015b due to Matlab bugs. PTB 3.0.13 and later work around those Matlab bugs, so advanced OpenGL functionality works again.
Another limitation is that "matlab -nojvm" mode, i.e., running with the Java JVM and desktop GUI disabled, may make Psychtoolbox unusable due to another set of Matlab bugs, which prevent OpenGL from working.
One way to side-step many of these issues caused by Matlab bugs is to use GNU/Octave as a free and open-source Matlab replacement instead.
Q: Why does my virus checker complain about the PTB-3 distribution on Windows?
I'm getting complaints about
Psychtoolbox\PsychContributed\.svn\text-base\nc111nt.zip.svn-base\nc.exe
Psychtoolbox\PsychContributed\.svn\text-base\nc.exe.svn-base
Psychtoolbox\PsychContributed\.svn\text-base\nc111nt.zip.svn-base
Psychtoolbox\PsychContributed\nc111nt.zip\nc.exe
Psychtoolbox\PsychContributed\nc111nt.zip
A: If you see this, then you are using an outdated version of Psychtoolbox and should upgrade to the latest beta release.
We used to ship the netcat tool (a normal utility that some scanners were allergic to) to submit registration data with the PsychtoolboxRegistration function. This feature, which helps us in finding funding, is now implemented using pnet.mex.
Q: What's the difference between a texture, a window, and a screen?
See the article FAQ: Textures, Windows, Screens
Q: What timing information does Screen('Flip') return?
See the article FAQ: Explanation of Flip Timestamps
Q: If my script aborts with an error, I'm left with a dead Psychtoolbox window that prevents me from accessing the Matlab command window. How can I close the window in case of an error?
See the article FAQ: Exit a Crashed Screen
Q: How do I control the debugging checks when Screen starts up?
See the article FAQ: Control Verbosity and Debugging
Q: When I call Screen('OpenWindow') my screen turns blue or white and nothing happens anymore. What's wrong?
A: A blue screen used to be the normal behaviour; nowadays it means you are using an outdated copy of PTB. Current versions show a graphical splash screen, or a black screen if visual debugging was disabled.
After the splash, the screen switches immediately to your selected background color (as specified in the Screen('OpenWindow') call), and your script takes control.
Until your first call to the Screen('Flip', ...) command to change it, the screen stays blank. If it stays blank, your program may be stuck waiting for something, or it might have terminated normally or abnormally (error).
See FAQ: Exit a Crashed Screen for how to get out of a stalled Screen.
You can disable the startup screen, i.e., replace it with a black display until calibration is finished, by issuing the command Screen('Preference', 'VisualDebugLevel', 3); at the beginning of your script.
See also the previous FAQ on verbosity and debugging.
Q: How do I make the initial screen black instead of white?
A: When PTB is first loading (running its various checks, etc.) it fills the screen with a welcome screen. This can be a problem for experiments that involve dark adaptation or eye tracking.
To make this initial screen black instead of white, add this call at the beginning of your main experiment code:
Screen('Preference', 'VisualDebugLevel', 1);
Q: How to display images with transparent backgrounds?
A: It is possible in PTB to display PNG images with transparent backgrounds. It requires your image to have an alpha channel.
- Save your image with a transparent background in PNG format and load it in Matlab:
[image, map, alpha] = imread('filename.png');
Here image is the usual [m x n x 3] RGB image matrix, and alpha is an [m x n x 1] matrix with the transparency (alpha) channel.
- Stack the alpha layer on top of image as a 4th layer:
image(:,:,4) = alpha;
- You need to enable alpha blending, so the transparency layer gets used:
win = Screen('OpenWindow', 0);
Screen('BlendFunction', win, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA');
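Putting the steps above together, a minimal sketch of drawing such an image as a transparent texture (the file name 'myimage.png' and screen index are assumptions):

```matlab
% Sketch: display a PNG with a transparent background as an alpha-blended texture.
[img, ~, alpha] = imread('myimage.png');   % img: m x n x 3 RGB, alpha: m x n transparency
img(:,:,4) = alpha;                        % append alpha channel as a 4th layer
win = Screen('OpenWindow', 0);
Screen('BlendFunction', win, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA');
tex = Screen('MakeTexture', win, img);     % turn the RGBA matrix into a texture
Screen('DrawTexture', win, tex);           % transparent parts let the background show through
Screen('Flip', win);
```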
Q: How can I update Matlab figures during the experiment?
A: Use the drawnow Matlab command whenever you want Matlab to update its figure windows. This forces Matlab to redraw the handle graphics.
It is NOT recommended to use drawnow inside time critical loops (e.g., where stimulus presentation is performed).
Q: How do you create a duplicate of an offscreen window?
A: You would first create an empty offscreen window of the same size and color depth as your original offscreen window, then copy the original window's content into the new window.
Let origWin be the offscreen window you want to duplicate, then:
duplicateWin = Screen('OpenOffscreenWindow', origWin);
Screen('CopyWindow', origWin, duplicateWin);
% duplicateWin is a copy of origWin.
Q: Can offscreen windows (created with OpenOffscreenWindow) have multiple buffers?
A: No. It wouldn't make sense to have multiple buffers, as this concept refers to what is currently being shown on the display. Offscreen windows are not shown on the display but rather hold data that may be moved to a buffer of a display window.
Q: Is there any simple way to set TextSize (or other such parameters) for all windows/screens?
A: No. You have to set it for each window via the Screen('TextSize', win, ...) command. But that's not much work, given that most people have at most two windows.
Q: How can I improve the drawing performance and timing of my PTB-3 code?
See the article FAQ: Performance Tuning
For time critical code, use the Priority([newLevel]) command to raise Matlab to realtime-priority mode. Select the newLevel value via newLevel = MaxPriority(...) to get the highest suitable priority for your needs in a portable, operating system independent way.
- The Linux kernel has multiple methods for managing priority. See Mario's comments on 'nice' vs 'priority' level implementations:
"The 'nice' value is only used for non-realtime scheduling, to distribute cpu time fairly (on average) across non-realtime processes. Realtime [RT] scheduling plays by different rules, and higher Priority() value means higher priority." -- mario
- tl;dr: It's safest to trust that PTB's Priority() implementation will provide the most effective & reliable solution, and to leave OS priority settings untouched.
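A minimal sketch of the recommended pattern, raising priority only around the time-critical presentation loop (the window setup and loop body are illustrative assumptions):

```matlab
% Sketch: boost priority for the stimulus loop, then drop back to normal.
win = Screen('OpenWindow', 0);
topPriority = MaxPriority(win);   % highest priority suitable for this window, OS independent
Priority(topPriority);            % enter realtime-priority mode
for i = 1:100
    % ... draw the stimulus for this frame here ...
    Screen('Flip', win);          % time critical presentation
end
Priority(0);                      % back to normal priority
sca;
```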
Q: Is it possible to get 10-bit DAC resolution with PTB-3?
See the article FAQ: 10-bit DACs
Q: What is the status of 10-bit frame buffer support?
See the article FAQ: 10-bit framebuffers
Q: When trying to play movies with GStreamer, Matlab crashes or gives Bus errors! What can I do?
A: Some video codecs can cause bad interactions between Matlab’s JavaVM and GStreamer on Microsoft Windows. This is currently not a solvable problem, but various workarounds are possible:
- Try a different operating system. Linux is strongly recommended.
- On Windows: Upgrade to Psychtoolbox 3.0.16 with GStreamer 1.16.0 MSVC or later and retry.
- Use GNU/Octave instead of Matlab on Windows.
- Run Matlab in -nojvm mode, i.e., with the matlab.exe -nojvm command line switch.
- For another workaround, click this link to a message on the Psychtoolbox forum.
Q: How to send TTL triggers?
A: For a long time, the parallel port was used for sending triggers, and it is possibly still the best option regarding timing. However, parallel port solutions are very platform-dependent and might not be available anymore anyway. So the next best option is to use one of the numerous USB digital I/O interfaces (see below).
If only one or two trigger lines are needed, one can get away with a serial port, toggling the control lines using IOPort. However, the electrical levels of a native serial port are not TTL-compatible, which is normally also true for USB-to-serial converters.
- USB interface: FAQ: TTL Triggers via USB.
- Parallel port (Linux): method 1 (incl. Octave support) - Recommended in most cases or method 2 or method 3.
- Parallel port (Win 2k/XP/Vista/7): FAQ: TTL Triggers in Windows.
Q: Are there known issues with Windows Vista / 7 and how to resolve them?
A: See the article FAQ: Vista and Windows 7
Q: Psychtoolbox and GetSecsTest tell me my clock and timers are broken. What gives?
A: See the article FAQ: GetSecs Test Fails
Q: Problem:
WARNING: Couldn't compute a reliable estimate of monitor refresh interval! Trouble with VBL syncing?!? ! PTB - ERROR: SYNCHRONIZATION FAILURE ! ----
A:
This used to be a problem due to macOS bugs; the symptoms are described below for historical reference. In November 2019, Psychtoolbox 3.0.16 introduced fixes for various macOS visual timing and syncing problems. We keep the following text for reference:
Try the following hack:
Switch resolutions within System Preferences --> Display Panel (Scaled). Select a different resolution, then reselect the required/native resolution. Repeat this hack after each system restart.
https://groups.yahoo.com/neo/groups/psychtoolbox/conversations/topics/18518
Similar (and worse) errors are present in Mac OS 10.11, 10.12. On 10.10/10.11 almost all NVidia cards are unfixably broken wrt. reliable visual stimulation timing. On 10.12, additionally to the broken NVidia cards, many modern AMD cards are also broken. On 10.13, many (all?) Intel chips are broken as well, so that 10.13 "High Sierra" is essentially unusable for visual stimulation with trustworthy timing.
Use of Apple macOS is therefore strongly discouraged for actual research data collection if you value the quality and reproducibility of your scientific studies.
Q: How do I add a FAQ & answer to this list?
A: It's pretty easy. Just follow the steps below.
- First you need to create a free GitHub account and log in to edit.
- Secondly, this is a place to give answers, not to ask questions. Use the Forum for that. A FAQ is a place to list commonly requested answers, so they do not need to be repeated time and time again.
- Next, go to the FAQ page (you are probably already here) and click the Edit button.
- You can type short entries that keep coming up directly here on the FAQ page. Our format looks like this:
---
##### Header for the FAQ Item
**Q:** The question in question
**A:** The succinct answer
- Longer explanations and example scripts should go on a new page and get linked from the Cookbook.
- You can choose your preferred markup syntax, but Markdown is preferred.
Q: Why did the chicken cross the street?
A: Nobody knows.
Q: How do I take a screen shot of my stimuli or record a movie file of them?
See the article FAQ: Screenshots or Recordings
Q: What is the status of macOS support on Apple Silicon Macs?
A: Since Psychtoolbox version 3.0.20, released in December 2024, Psychtoolbox has native support for Apple macOS on Apple Silicon Macs. The software runs with native 64-Bit GNU/Octave and Matlab R2023b and later for Apple Silicon, and, more importantly, can deal with the unique and special challenges of running under macOS for Apple Silicon.
Psychtoolbox itself, with its various 2D and 3D visual stimulus drawing and visual stimulus post-processing functions, but also the many user scripts written since the year 2007 by users and as part of 3rd party toolboxes on top of Psychtoolbox, use the cross-platform OpenGL hardware accelerated graphics API for 2D/3D rendering and image processing. OpenGL is well supported on GNU/Linux with high quality native graphics drivers for many graphics cards (gpu's) from AMD, Intel, NVidia, Qualcomm, ARM, Samsung, Broadcom (RaspberryPi), Imagination Technologies and others, even on Apple Silicon Macs with M1 and M2 SoC's! It is also well supported with native drivers on MS-Windows and on Apple macOS for Intel Macs for gpu's from AMD, NVidia and Intel.
The situation on Apple Silicon macOS is different: Apple Silicon Macs use an Apple proprietary gpu (known as AGFX) and display engines (known as DCP) for rendering and display. macOS for Apple Silicon Macs does not implement OpenGL via native drivers for this Apple proprietary hardware; instead OpenGL is implemented via an emulation layer on top of the Apple proprietary Metal graphics API and CoreAnimation framework. This Apple special snowflake implementation breaks the working principles of all existing vision science toolkits, including the highly advanced algorithms used by Psychtoolbox, with respect to visual stimulation timing and timestamping. In other words, visual stimulus presentation timing and accuracy is severely broken in Psychtoolbox versions earlier than v3.0.20, and not usable for research grade stimulation and data collection.
For this reason, completely new methods for visual stimulus presentation had to be implemented for Psychtoolbox 3.0.20 on macOS on Apple Silicon. These new methods are still not feature complete compared to macOS on Intel Macs or other operating systems, but we are getting there. The functionality in Psychtoolbox 3.0.20 and later is currently beta quality and covers the basics of visual stimulus presentation, with reasonably accurate visual stimulus onset timestamps, ok presentation timing, and the ability to display at 8 bpc and 10 bpc framebuffer precision, to drive High Dynamic Range (HDR) displays, as well as visual stimulators from VPixx or Cambridge Research Systems. Typical Screen('Flip') use cases should work reasonably well. Screen('AsyncFlip') functionality and less common use cases are still missing, as is complete validation and fine-tuning. But the basic usability for visual stimulation should be there. Audio functionality, response collection via mouse and keyboard, I/O, timing and most other functionality is expected to be roughly on par with macOS for Intel Macs.
Please note that use of Psychtoolbox 3.0.20 or later will require purchase of a software subscription license.
Q: Are touchscreens supported? How do I use them?
A: Psychtoolbox supports multi-touch touchscreens on Linux and Microsoft Windows. It doesn't support touchscreens on Apple macOS, as macOS does not have any support for touchscreens, at least as of macOS 15 in the year 2025. On Linux, use of multiple simultaneous touchscreens on multiple open onscreen windows is supported. On Windows, touchscreen support is more limited, e.g., only one onscreen window can receive touchscreen input, only one touchscreen can be used, and in certain scenarios visual stimulation timing may suffer when used with touchscreen input. These are limitations of Microsoft Windows, not of Psychtoolbox. Therefore Linux is the best choice for using touchscreens. On Linux, touchpads can be repurposed as multi-touch "touchscreen" devices, and on some touchscreens each touch point provides more details, e.g., touch orientation. Typing help TouchInput provides more details, configuration tips, and tips for test procedures. MultiTouchMinimalDemo.m shows the most basic use of multitouch touchscreens, MultiTouchDemo.m shows use while exposing all details about touches, and MultiTouchPinchDemo.m gives an example of how to detect a two finger pinch gesture for use in an experiment.
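The basic touch-queue pattern used by these demos can be sketched as follows (device selection and loop structure are a minimal sketch; see the demos and help TouchEventGet for robust code and the exact event fields):

```matlab
% Sketch: minimal multi-touch event loop, following MultiTouchMinimalDemo.m.
dev = GetTouchDeviceIndices([], 1);        % find direct touch devices (touchscreens)
dev = dev(1);                              % use the first one found
win = Screen('OpenWindow', 0);
TouchQueueCreate(win, dev);                % create a touch event queue for this window
TouchQueueStart(dev);                      % start collecting touch events
while ~KbCheck                             % run until any key is pressed
    while TouchEventAvail(dev)
        evt = TouchEventGet(dev, win);     % fetch next touch event struct
        % ... react to the touch coordinates/state stored in evt here ...
    end
end
TouchQueueStop(dev);
sca;
```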
Q: Can I somehow display stimuli with more than 10 bit precision per color channel?
A: Psychtoolbox offers different methods of displaying stimuli with more than 10 bit per color channel (10 bpc) precision. On Microsoft Windows you can use 10 bpc framebuffers on professional class graphics cards from AMD and NVidia since the year 2008. Consumer graphics cards usually do not support 10 bpc output on MS-Windows, except when used in HDR-10 display mode on a HDR display monitor.
On Apple macOS for Apple Silicon Macs, 10 bpc framebuffers are supported. On Intel Macs, 10 bpc framebuffers are supported on some AMD graphics cards if the Vulkan display backend instead of the standard OpenGL backend is used, or in HDR mode. macOS claims to support 16 bit floating point framebuffers for an effective 11 bpc linear precision, but this seems to be implemented in software, not in display hardware, so it has severe timing and performance implications.
The situation on Linux is the most advanced: 10 bpc framebuffers are supported on all graphics hardware that supports 10 bit output, regardless of whether it is pro class hardware or cheaper consumer hardware. E.g., all AMD graphics cards since at least the year 2006, NVidia cards since 2008, and Intel graphics cards since at least the year 2010 support 10 bit, as do modern RaspberryPi 4 and 5 models when used with Mesa open-source graphics drivers of Mesa version 24 or later. These 10 bpc framebuffers can be enabled by use of the XOrgConfCreator assistant to create a custom XOrg configuration file for 10 bpc / 30 bit deep color. Additionally, AMD graphics cards supported by AMD's open-source Vulkan driver AMDVLK also support 10 bpc framebuffers without the need for a configuration file, if the Vulkan display backend of Psychtoolbox is used, or if a HDR display mode is selected on a HDR display monitor.
Certain AMD graphics cards, when used with the AMDVLK Vulkan driver, can also support 16 bpc framebuffers: stimuli get drawn/rendered, post-processed and finally stored at 16 bpc precision. Actual display of the stimuli happens with up to 12 bpc precision on current generation AMD graphics cards, either natively when connected to a 12 bpc input capable monitor via DisplayPort or HDMI, or on lower precision monitors (10 bpc or 8 bpc) via spatial dithering to simulate 12 bpc precision over a native 10 bpc or 8 bpc display. Note that these 16 bpc framebuffers have been verified to work with AMD graphics cards starting with the "Sea Islands" gpu family from around the year 2014, through Volcanic Islands and Polaris, and up to the AMD Vega gpu family, discrete or as integrated gpu in various AMD Ryzen processors. Note that AMD introduced some serious bugs and limitations into their latest AMDVLK drivers, which appear to break 16 bpc framebuffer support on their latest graphics cards of the RDNA gpu family (Radeon RX5000 and later). We have implemented a workaround for this in Psychtoolbox and a hacked up version of the old AMDVLK v2023.Q3.3 driver, which we hope may fix this problem, but we haven't had feedback from our beta testers yet on whether this repairs 16 bpc framebuffers on the latest generations of AMD RDNA gpu's.
You can follow the current state of that discussion under the following link:
In general, having a 10 bpc or 16 bpc high precision framebuffer is necessary but not sufficient for truly 10 bpc or 12 bpc video output and display. You also need a video connection that is 10 bpc or 12 bpc capable. This usually means the display must be capable of accepting 10 bpc or 12 bpc video input and be connected via modern DisplayPort or HDMI. DVI-D display connections only support 8 bpc. Built-in analog VGA outputs of AMD and NVidia cards since around the years 2006-2008 did have 10 bit DAC's, so they were able to drive analog VGA CRT monitors at 10 bpc precision via a VGA connector, DVI-A connector or DVI-I connector. If you use an active USB-C/HDMI/DisplayPort to VGA converter dongle, then it is completely up to the specific model of converter dongle whether it can drive a VGA analog display with 8 bpc or 10 bpc.
As mentioned before, connecting a digital display device that can only accept 8 bpc input, e.g., via DVI-D, will usually cause the graphics card to enable hardware spatial, temporal or spatiotemporal dithering to simulate 10 bpc or 12 bpc precision on such a display. Whether this is an effective way to achieve 10 or 12 bpc precision, or whether it is unsuitable for your specific experimental paradigm, is something only you can evaluate.
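Once the system side is configured (e.g., via XOrgConfCreator + XOrgConfSelector on Linux), requesting a high precision framebuffer from a script is done via PsychImaging tasks. The following is a minimal sketch, not a complete experiment:

```matlab
% Sketch: open an onscreen window with a native 10 bpc framebuffer.
% Assumes the OS/driver side is already set up for 30 bit deep color.
PsychDefaultSetup(2); % Unified key names, normalized 0-1 color range.
screenid = max(Screen('Screens'));
PsychImaging('PrepareConfiguration');
% Request a native 10 bpc framebuffer from the graphics driver:
PsychImaging('AddTask', 'General', 'EnableNative10BitFramebuffer');
% Draw and post-process at 32 bit float precision internally:
PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');
win = PsychImaging('OpenWindow', screenid, 0.5);
% Gray levels can now be specified with roughly 1/1024 granularity:
Screen('FillRect', win, 0.5 + 0.5/1024);
Screen('Flip', win);
KbStrokeWait;
sca;
```

On suitable AMD hardware with AMDVLK, 'EnableNative16BitFramebuffer' can be requested instead; see `help PsychImaging` for the authoritative list of tasks and their hardware requirements.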
What if you need more than 10 or even 12 bpc precision? Standard consumer off the shelf graphics cards and displays can't do this, but Psychtoolbox also supports special purpose visual stimulator devices targeted at vision science applications. It can drive those via standard graphics cards: For display of pure grayscale images on analog VGA CRT monitors, there is the Xiangrui Li et al. "VideoSwitcher" video attenuator device for high precision luminance output with up to 16 bits luminance resolution. See help PsychVideoSwitcher
for more details. A generic attenuator driver of Psychtoolbox can be configured and calibrated for other Pelli & Zhang style video attenuators. If you need greater than 12 bpc color precision, there are commercial stimulators and display devices from Cambridge Research Systems (CRS) and VPixx, e.g., the CRS Bits# and Display++ and the VPixx Datapixx 1/2/3, ViewPixx display panel and Propixx projector. Psychtoolbox can drive those with high performance at up to 14 bpc precision with CRS devices, and up to 16 bpc precision with VPixx devices. These devices are typically far more expensive than standard graphics cards combined with consumer displays, e.g., displays for HDR or color proofing.
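Driving such a device is also done via PsychImaging tasks. As a hedged sketch - assuming a VPixx DataPixx is connected and its support software is installed - high precision monochrome output via the M16 mode might be set up like this:

```matlab
% Sketch: 16 bit monochrome luminance output via a VPixx DataPixx in
% M16 mode. Assumes a DataPixx device is connected and configured.
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
% Keep full precision throughout the imaging pipeline:
PsychImaging('AddTask', 'General', 'FloatingPoint32Bit');
% Route output through the DataPixx in 16 bit monochrome mode:
PsychImaging('AddTask', 'General', 'EnableDataPixxM16Output');
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0.5);
Screen('FillRect', win, 0.5 + 1/65536); % One 16 bit step above mid-gray.
Screen('Flip', win);
sca;
```

The equivalent task for CRS Bits# devices is 'EnableBits++Mono++Output'; check `help PsychImaging` for the exact task names and options for your device.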
The demo AdditiveBlendingForLinearSuperpositionTutorial.m demonstrates the various methods of displaying high precision Standard Dynamic Range (SDR) stimuli.
BitsPlusCSFDemo.m is another demo. help PsychImaging describes additional driver methods for displaying high precision stimuli, e.g., 'UseSubpixelDrive' for driving certain monitors for high precision display of monochromatic medical images, e.g., radiology monitors like the "Eizo RadiForce GS-521".
See also the following FAQ about High Dynamic Range (HDR) displays.
Q: Is it possible to display High Dynamic Range stimuli?
A: Yes, typical consumer class High Dynamic Range (HDR) display devices, following the HDR-10 video standard, are supported on all operating systems on suitable HDR capable graphics cards by Psychtoolbox's Vulkan display backend. Specifically, AMD graphics cards under Linux are supported with up to 16 bpc floating point or 16 bpc fixed point precision, high performance, and precise and trustworthy visual stimulus display timing and timestamping, when the AMDVLK AMD open-source Vulkan driver is installed. Other graphics cards are currently not supported for HDR display under Linux, as of spring 2025. Suitable AMD, NVidia and Intel graphics cards are supported on MS-Windows 10 and later, but often with limited or unreliable visual stimulation timing, due to display driver bugs found for both AMD and NVidia graphics on Windows. HDR is also supported under Apple macOS for Apple Silicon Macs, and on some subset of Intel Macs, with limited performance.
Psychtoolbox supports setup of "static HDR Meta data type 1" signalling to tell a connected HDR monitor about the minimum, maximum and average luminance of stimuli, and the color gamut of the HDR mastering display used to author HDR stimuli. HDR monitors may use that information for post-processing, gamut remapping, and tone mapping for displaying HDR stimuli as close as possible to the intended way they were meant to be displayed. HDR output is supported at 10 bpc (HDR-10 standard) and 16 bit floating point.
We have multiple demos showing use of HDR display, e.g., SimpleHDRDemo.m for a simple animation, HDRViewer.m as an HDR image viewer, MinimalisticOpenGLDemo.m with the hdr flag set, to show HDR rendering of 3D scenes in OpenGL, and HDRTest.m for testing the precision of HDR stimulus display via a colorimeter. PlayMoviesDemo.m also has an hdr flag that allows playing back typical HDR movies via the GStreamer multi-media framework.
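The basic setup for HDR display, following the pattern of SimpleHDRDemo.m, is a single additional PsychImaging task. A minimal sketch:

```matlab
% Sketch: open a window in HDR-10 mode on an HDR capable monitor.
% With 'EnableHDR', color values are specified in units of nits.
PsychDefaultSetup(2);
screenid = max(Screen('Screens'));
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'EnableHDR'); % Defaults to HDR-10.
win = PsychImaging('OpenWindow', screenid);
% Fill the screen with a 300 nits white patch:
Screen('FillRect', win, [300 300 300]);
Screen('Flip', win);
KbStrokeWait;
sca;
```

See `help PsychHDR` and SimpleHDRDemo.m for optional parameters, e.g., selecting different color input units or querying HDR display properties.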
Q: How can I play back high resolution movies, e.g., 4k HDR-10, or high-speed video in general?
A: See the updated article FAQ: HD UHD 4k HDR Video Playback
Q: How can I present multiple visual stimuli in parallel?
A: For presenting visual stimuli to multiple open onscreen windows, there are two methods, a Linux specific one, and a general cross-platform one.
The cross-platform method is to use asynchronous background flips, or "async flips" for short. You draw your stimulus into an onscreen window as usual, then call Screen('AsyncFlipBegin', window, twhen, ...); instead of the most commonly used Screen('Flip', window, twhen, ...);. This will schedule a flip for that window at target time twhen, just as with Screen('Flip', ...). The difference is that the 'AsyncFlipBegin' subcommand will not block execution of your script until the flip has completed at or after time twhen; instead the flip for that onscreen window will execute and complete in the background, while your script continues execution. This allows you to perform other tasks in parallel to the pending flip, e.g., experiment logic, response collection, equipment control etc. It also allows you to already prepare and draw the next stimulus n+1 to be presented after the currently pending stimulus image n, or to draw a stimulus for another onscreen window. You can periodically check the status of the pending async flip via Screen('AsyncFlipCheck') - has it completed already or is it still pending? If the flip has completed, the command will return the various timestamps and status info relating to that flip, e.g., the actual visual stimulus onset timestamp. You can also pause execution of your script until a stimulus has finally been presented via a completed flip. This is done via the Screen('AsyncFlipEnd') command - a blocking version of 'AsyncFlipCheck'. Clever use of either 'AsyncFlipEnd' or 'AsyncFlipCheck', interleaved with other script operations unrelated to visual presentation, stimulus drawing for later stimuli, or flipping to additional onscreen windows, allows you to parallelize drawing and presentation to multiple windows, or other non-visual processing like sound, response collection, equipment control, or experiment control logic. Multiple demos show how to use async flips: AsyncFlipTest.m for squeezing out more graphics performance by overlapping presentation of stimulus image n and drawing of successor stimulus image n+1, and MultiWindowLockStepTest.m for presentation of stimuli to multiple onscreen windows in parallel.
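The basic async flip pattern can be sketched as follows - a minimal loop, not a complete experiment:

```matlab
% Sketch: overlap presentation of the current stimulus with other work,
% using async flips. Assumes a standard onscreen window setup.
PsychDefaultSetup(2);
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
ifi = Screen('GetFlipInterval', win);
vbl = Screen('Flip', win);
for n = 1:300
    % Draw stimulus n (here just an alternating dot):
    Screen('FillOval', win, mod(n, 2), [10 10 100 100]);
    % Schedule the flip, then return immediately without blocking:
    Screen('AsyncFlipBegin', win, vbl + 0.5 * ifi);
    % ... do other work here: response collection, i/o, etc. ...
    % Block until the pending flip has completed, collect its timestamp:
    vbl = Screen('AsyncFlipEnd', win);
end
sca;
```

Replacing the blocking 'AsyncFlipEnd' with periodic 'AsyncFlipCheck' calls lets the loop poll instead of wait, as described above.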
The downside of async flips, apart from the higher complexity of your code, is that the processing by multiple parallel background threads can put more stress on the graphics and display driver of your graphics card. If the driver is a bit buggy, this additional stress may expose driver bugs in certain situations, causing unexpected instability on some operating systems with some graphics cards and some driver versions.
Other limitations are that async flips do not work for frame sequential stereo presentation, or for presentation of stimuli with fine-grained timing on VRR displays (only supported on Linux), or for presentation by use of Psychtoolbox Vulkan display backend. The Vulkan display backend is used for displaying stimuli at more than 10 bpc color depth, or for display of High Dynamic Range HDR stimuli. Ergo, most types of stimuli work with async flips, but there are exceptions. Yet another limitation is that async flips don't work at the moment on Apple macOS for Apple Silicon Macs, as the Vulkan display backend is always used on that system. The mechanism also doesn't work for presenting to virtual reality headsets via the VR drivers.
On Linux only - in other words, the following approach is not portable - one can also use the standard Screen('Flip', ...) command to schedule a flip for a time in the future, without pausing/blocking execution of your script until the flip is completed. Linux can do this by itself, without the need for parallel multi-threading, and can log the completion timestamps and status of completed flips. The command Screen('GetFlipInfo', ...); allows enabling and disabling logging of such timestamps in the background, and can retrieve the timestamps and status of past flips. This allows you to draw and queue multiple stimulus images and flips for an onscreen window ahead of time, whereas async flips only allow predrawing one stimulus image ahead. In short, one can queue up multiple stimulus frames ahead of time, the complexity of handling this is lower, and the stress on the system is also lower. Use of this Linux specific mechanism is demonstrated by the demos PerceptualVBLSyncTestFlipInfo.m and PerceptualVBLSyncTestFlipInfo2.m.
This Linux mechanism doesn't work in some scenarios, similar to async flips: Frame sequential stereo presentation, presentation of stimuli with fine-grained timing on VRR displays, and presentation by use of Psychtoolbox Vulkan display backend for displaying stimuli at more than 10 bpc color depth, or for display of HDR stimuli. The mechanism also doesn't work for presenting to virtual reality headsets via the VR drivers.
Note that if your goal is not so much to present to multiple display windows at a time, but simply to perform parallel presentation of visual stimuli with auditory stimuli or with response collection, it may be more straightforward to use a conventional Screen('Flip', ...) for a single window stimulus, but perform auditory stimulation or response collection in the background. The PsychPortAudio driver provides various facilities to schedule auditory stimulus presentation in the future, or to run audio playback in the background. Cf. BasicSoundScheduleDemo.m, BasicAMAndMixScheduleDemo.m, or PsychPortAudioTimingTest.m for auditory presentation. Psychtoolbox provides input methods for background response collection and logging/timestamping for keyboard, mouse and touchscreen input, as described in another FAQ entry.
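As an illustration of the scheduled-audio approach, the following sketch starts a beep at a precise future time while the script stays free for visual presentation; timing of the requested onset is handled by PsychPortAudio in the background:

```matlab
% Sketch: schedule sound onset for a precise future time, non-blocking.
InitializePsychSound(1); % Request low-latency audio mode.
% Open default device: mode 1 = playback, latency class 1, 44100 Hz, stereo:
pahandle = PsychPortAudio('Open', [], 1, 1, 44100, 2);
beep = MakeBeep(500, 0.1, 44100); % 500 Hz tone, 100 msecs duration.
PsychPortAudio('FillBuffer', pahandle, [beep; beep]);
tWhen = GetSecs + 0.5; % Requested sound onset, 500 msecs from now.
% Start playback at tWhen; last arg 0 = return immediately:
PsychPortAudio('Start', pahandle, 1, tWhen, 0);
% ... draw and Screen('Flip', ...) visual stimuli here in parallel ...
% Wait for end of playback, then clean up:
PsychPortAudio('Stop', pahandle, 1);
PsychPortAudio('Close', pahandle);
```

PsychPortAudio('GetStatus', pahandle) can additionally report the true measured sound onset time for timing verification.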
Q: Which function should I use for collecting keyboard (and mouse) input?
A: PTB offers quite a zoo of keyboard and mouse input functions, but which one to use best depends on the application scenario.
See the article FAQ: Processing keyboard input
Q: How can I use Virtual Reality Head mounted displays (VR-HMDs)?
A: Psychtoolbox provides multiple backend drivers for Virtual Reality (VR) Head mounted displays (HMDs), but only one of them - the most recent one - is of real relevance: The OpenXR backend driver (a combination of the PsychOpenXR.m M-File and the PsychOpenXRCore mex file) currently supports all VR HMDs which can be operated with an OpenXR v1.0 compatible runtime that supports stimulus rendering via OpenGL. To our knowledge these are pretty much all common VR HMDs which are currently commercially available, for Microsoft Windows and GNU Linux. E.g., all VR HMDs from Meta (formerly known as Oculus VR) have a suitable OpenXR runtime made by Meta for MS-Windows. Some popular models from Meta also work on Linux via the "Monado" open-source OpenXR runtime. All VR HMDs supported by Valve's SteamVR runtime are supported on Linux and Windows, e.g., all devices from Valve, most devices from HTC, various "Windows Mixed Reality" HMDs and various others. The professional Varjo HMDs come with their own OpenXR runtime for MS-Windows. Various other vendors have their own OpenXR runtimes, and the Monado open-source OpenXR runtime supports various additional devices on Linux to varying degrees. In other words, our OpenXR backend driver gives access to most VR HMDs on the market under MS-Windows and GNU Linux. Psychtoolbox also has a dedicated PsychOculusVR1 driver for original VR HMDs from OculusVR / Meta on Microsoft Windows only, covering all devices supported by their Oculus v1 runtime, although Meta themselves strongly recommend using OpenXR instead, so this is just a legacy backup solution. Another legacy driver, PsychOculusVR, is for the original Oculus Rift DK1 and DK2 developer kits and works on Linux and Windows with the v0.5 Oculus runtime. Again, these devices are almost extinct (over 13 years old by now and long out of sale) and better served by the OpenXR driver.
On Linux we also have a PsychOpenHMDVR driver for HMD's supported by the open-source OpenHMD runtime. Again, better served by OpenXR.
In the future, our OpenXR driver could be extended for Augmented Reality (AR) and Mixed Reality (MR) HMD's and smart glasses and eye wear, as OpenXR covers the full range of VR/AR/MR/XR devices as a one-stop solution to "rule them all", and as a cross-platform, cross-vendor open industry standard.
That said, Apple macOS is currently not supported for VR applications at all. The reason is that Apple decided not to play ball with other vendors and built their own proprietary toolkit, which is completely incompatible with OpenXR. SteamVR and OculusVR used to work in very early versions on macOS, but both vendors have retracted their support for the Apple platforms. Very old OculusVR 0.5 runtimes for macOS were supported by old Psychtoolbox versions via the PsychOculusVR backend driver. These only worked as 32-Bit Intel executables, not on 64-Bit Intel Macs or Apple Silicon Macs, so when macOS 10.15 removed support for 32-Bit Intel executables, that was the death of our VR driver for macOS. If you download an old enough Psychtoolbox, you could still use 10+ year old Oculus Rift DK1 or DK2 HMDs on macOS versions 10.14 or earlier, though. Regarding macOS, we would consider supporting it via a port of our OpenXR backend driver if some 3rd party created a suitable and performant OpenXR runtime for Apple macOS, but until then macOS is a no-go.
Our OpenXR backend, just like the older legacy backends OculusVR1, OculusVR and OpenHMDVR, is used via our high level PsychVRHMD() function. This function abstracts away all the existing backend drivers and provides a unified interface, so user scripts written against our early drivers should work unmodified with our latest OpenXR backend driver. Of course, interesting new functionality will only be added to the OpenXR backend, so scripts that make use of OpenXR-only functionality will not work with the old backend drivers. This should not be of any concern though, as the legacy backend drivers are really only kept as backups for unforeseen circumstances and will be removed soon, given that our OpenXR driver has now matured over more than 3 years.
Our OpenXR based driver can do classic VR stimulus display: It tracks and reports the head position and orientation of the user, as well as the tracked pose of hand controllers (VR controllers), and button presses and other input via the controllers. Based on this information, the user's script can use Psychtoolbox OpenGL low-level rendering functions, or the Psychtoolbox binding to the 3rd party Horde3D graphics engine, to render stereoscopic 3D scenes of varying complexity, and then display those scenes on the HMD. The usual Psychtoolbox 3D drawing and rendering functions and stereo display functions are used to render and display the stimulus. It is very easy to convert any standard stereoscopic 3D rendering script, e.g., for binocular display on a haploscope or via 3D shutter glasses or other stereoscopic displays, into a script for VR display. A single setup call hmd = PsychVRHMD('AutoSetupHMD', 'Tracked3DVR'); is enough in the most simple case. Various PsychVRHMD subfunctions allow augmenting and improving on that most simple case, e.g., querying optimal rendering parameters and OpenGL projection matrices for the VR HMD via PsychVRHMD('GetStaticRenderParameters', hmd);, or getting tracked head pose and OpenGL Modelview matrices via state = PsychVRHMD('PrepareRender', hmd);, which returns a state struct with various info about head pose and other useful information. These very simple conversions of an existing 3D stereoscopic rendering script into a VR script, which should typically take less than 5 minutes of work, are demonstrated by various Psychtoolbox demo scripts, e.g., VRHMDDemo1.m, MorphDemo.m and SuperShapeDemo.m. Our Horde3D engine integration also comes with HordeDemo.m for a complex rendered scene. All these demos have been extended with fully head tracked 3D rendered VR.
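The skeleton of such a converted script, loosely following VRHMDDemo1.m, might look like this sketch (the rendering details are left out and depend on your scene):

```matlab
% Sketch: minimal head tracked VR rendering loop via PsychVRHMD.
PsychDefaultSetup(2);
InitializeMatlabOpenGL; % Enable low-level OpenGL for 3D rendering.
PsychImaging('PrepareConfiguration');
% Auto-detect and set up the first supported HMD for tracked 3D VR:
hmd = PsychVRHMD('AutoSetupHMD', 'Tracked3DVR');
[win, rect] = PsychImaging('OpenWindow', max(Screen('Screens')));
% Per-eye OpenGL projection matrices, suitable for this HMD:
[projL, projR] = PsychVRHMD('GetStaticRenderParameters', hmd);
while ~KbCheck
    % Retrieve current head pose and ready-made modelview matrices:
    state = PsychVRHMD('PrepareRender', hmd);
    for eye = 0:1
        Screen('SelectStereoDrawBuffer', win, eye);
        % ... render the 3D scene for this eye here, using
        % state.modelView{eye + 1} and projL / projR ...
    end
    Screen('Flip', win);
end
sca;
```

See `help PsychVRHMD` for the full set of subfunctions and the exact fields of the returned state struct.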
Our driver can also display stereoscopic/binocular stimuli without head tracking or even 3D rendering. In other words, it allows replacing a classic stereo display device with a HMD, acting as a stereo monitor wrapped around the head of the subject. For that, a simple one-line setup call PsychVRHMD('AutoSetupHMD', 'Stereoscopic'); has to be added to an existing stereo script. This is demonstrated in ImagingStereoDemo.m for dynamic binocular stereo stimuli, and in ImagingStereoMoviePlayer.m for playing back stereoscopic movies, as well as in VRHMDDemo.m.
Yet another use case is monoscopic display of any visual stimulus, basically acting as a display monitor wrapped around the subject's head. The setup call here is hmd = PsychVRHMD('AutoSetupHMD', 'Monoscopic');. Demos that show this are GazeContingentDemo.m, MovingLineDemo.m, and FlipTimingWithRTBoxPhotoDiodeTest.m.
Beyond head tracking and VR tracked controller input, e.g., from Meta's "Oculus Touch" controllers or Valve's "Steam controllers", our driver also supports eye gaze tracking of the user's gaze on HMDs with built-in eye trackers. This has been extensively tested with the HTC Vive Pro Eye HMD under MS-Windows, but there are other HMDs which support this. Our demos GazeContingentDemo.m for 2D monoscopic stimuli, VRHMDDemo.m and VREyetrackingTest.m, as well as VRInputStuffTest.m, demonstrate eye tracking, both in 2D screen space reporting 2D gaze fixation points, and in 3D space reporting "gaze rays" for intersection with 3D surfaces or scene elements. HTC HMDs are especially well supported at the moment on MS-Windows for binocular eye gaze tracking, reporting of eye opening and pupil diameter, and fast tracking, utilizing HTC's SRAnipal api. Other HMDs provide monocular eye gaze reporting via the XR_eye_gaze_interaction extension at the moment.
A new feature is tracking of hand poses and finger configuration via hand tracking, either by built-in vision based hand tracking in some HMDs, or via connection of 3rd party hardware, e.g., Ultraleap Leapmotion controllers under Linux and MS-Windows. This allows reporting the 3D position and orientation of the user's hands, and the 26 finger joint angles of all the fingers of each hand, providing detailed reporting of hand configuration in space. While most of these hand tracking systems are based on at least stereoscopic cameras and computer vision for hand tracking, our backend allows interfacing with any hand tracking technology supported by OpenXR, e.g., 3D gloves, marker based or magnetic tracking, etc. Hand tracking is currently demonstrated by the demo VRInputStuffTest.m, which also demonstrates all other input modalities like eye gaze tracking, controller tracking and input, head tracking etc.
One limitation of all commercial HMDs and runtimes, which are focused on typical consumer VR, is that they do not provide a reliable and accurate mechanism to report timestamps of when a VR stimulus was presented to the user, or to schedule when exactly a visual stimulus should be presented. Usually no mechanism for this is provided at all in common VR and XR runtimes. In other words, the typical timestamps that can be passed into (tWhen) or returned by (tVblank, tOnset) the [tVblank, tOnset] = Screen('Flip', win, tWhen, ...); command as part of your typical Psychtoolbox script will not be very reliable and trustworthy - or reliable at all - when the stimulus is presented on a VR HMD with typical VR runtimes, including OpenXR. This is a significant limitation for use in vision science and neuroscience paradigms. Psychtoolbox has some hacks to remedy this problem as much as possible, to at least allow some basic level of timing and scheduling, but results are highly dependent on use case and setup. Psychtoolbox also has a somewhat hacky solution for part of the problem, the accurate and trustworthy reporting of visual stimulus onset timestamps, but only when used under GNU Linux with the Monado open-source OpenXR runtime. The setup for this is slightly fiddly, but workable. The problem of easy to use timestamping or timed stimulus onset is so far unsolved. This makes Psychtoolbox the best solution if you need precise timing and control, but only in the sense that it is the only solution that tries with some success, i.e., "the one-eyed is king among the blind". I have been working on a prototype for a proper open-source solution, again based on the Linux open-source OpenXR runtime Monado, and work continues, so stay tuned for a proper, clean, reliable solution in the future.
The current VR support of Psychtoolbox has been tested by us on the Oculus Rift DK-1, Oculus Rift DK-2, Oculus Rift CV-1, and the HTC Vive Pro Eye, but as mentioned, it should work on any OpenXR compliant HMD.
Q: Is it possible to present visual stimuli with very fine-grained timing?
A: Yes, on desktop GNU Linux, with suitable gpu's, drivers, and display devices which are capable of Variable Refresh Rate (VRR). On the hardware side this is possible via suitable HDMI or DisplayPort (DP) connections, including MiniDisplayPort, USB-C and eDP. HDMI 2.1 defines VRR, and there also exists an AMD specific HDMI protocol for VRR preceding the HDMI 2.1 standard. DisplayPort supports VRR via the so-called "VESA DisplayPort Adaptive-Sync" protocol extensions, or via eDP panel self-refresh support for built-in laptop display panels connected via eDP. Marketing names or variants of this VRR support are known as "HDMI VRR", "VESA DisplayPort Adaptive-Sync", AMD FreeSync, or NVidia G-Sync. Psychtoolbox has specialized algorithms to take advantage of such VRR display modes instead of regular fixed refresh rate (FRR) display modes. These algorithms are currently only supported on Linux, not on MS-Windows or Apple macOS. On Linux, in principle, Intel graphics chips of generation 11 "Ice Lake" or later, NVidia graphics cards of the Kepler gpu family or later with the proprietary NVidia driver, and AMD graphics cards are supported. In practice, Psychtoolbox has not been tested for quality, reliability and accuracy on Intel graphics due to lack of suitable hardware, so Intel VRR may work just fine or it may not - there is only one way to find out: test it! On NVidia gpu's it has been tested and works ok. Best performance, reliability and accuracy is to be expected on AMD FreeSync capable graphics cards - pretty much all AMD graphics cards since the Sea Islands gpu family introduced around the year 2014 - when used via DisplayPort connection for external displays, embedded DisplayPort (eDP) for suitable laptop internal panels, or via AMD's own proprietary HDMI VRR extension. The HDMI 2.1 VRR standard is not supported by AMD's open-source drivers due to HDMI licensing restrictions.
In short, we strongly recommend an AMD FreeSync capable gpu, with a suitable FreeSync compatible display connected via DisplayPort or embedded DisplayPort. HDMI VRR may or may not work. NVidia GSync capable gpu's with proprietary driver can work, but with degraded precision and reliability. Mario Kleiner contributed significant improvements to the FreeSync implementation of the AMD Linux open-source display driver amdgpu-kms which improve performance and reliability when using this on Linux + AMD graphics. No such improvements were possible for NVidia's proprietary driver.
If you have a suitable gpu, then there are currently two methods shipping in Psychtoolbox for taking advantage of VRR:
-
Psychtoolbox "fine-grained timing" support works on all supported graphics cards: You need to enable VRR support once via XOrgConfCreator + XOrgConfSelector, to create and install a custom xorg.conf X configuration file, and then log out and log in again. After this one-time setup, you can choose a Psychtoolbox fullscreen onscreen window to use fine-grained timing via VRR by use of the imaging pipeline setup command PsychImaging('AddTask', 'General', 'UseFineGrainedTiming');. This will request VRR display mode for fullscreen onscreen windows, run some diagnostics and calibrations to ensure reliable operation, and then enable the special VRR scheduling algorithm. From then on, when using a Screen('Flip', window, tWhen, ...); command to request a specific target stimulus onset time tWhen, Psychtoolbox will no longer try to present the stimulus at the start of the earliest video refresh cycle at or soon after tWhen. Instead the algorithm will try to make clever use of the properties of VRR displays to present the stimulus as close as possible to the exact requested tWhen target time. This means that the spacing of stimulus presentations in time, or the duration of presentation of a specific visual stimulus, is no longer locked to or quantized to the duration of a regular video refresh cycle, e.g., 8.333 msecs on a display running at 120 Hz refresh rate. Instead, spacing or duration between stimuli can be pretty much anything of at least one video refresh cycle duration, e.g., for a 120 Hz display anything beyond 8.333 msecs: 9 msecs, 11.5 msecs, 123.2 msecs etc. This allows for very fine-grained choice of inter-stimulus intervals (ISIs) or stimulus durations, and the target time can be changed on a flip by flip basis. Selecting a fixed interval allows playback of movies or animations at arbitrary fixed framerates, instead of framerates which are integral fractions of the video refresh rate. E.g., on a 120 Hz display you are no longer restricted to 120 fps, 60 fps, 40 fps, 30 fps, 24 fps, 20 fps, ... but could also play an animation at 118 fps, 73 fps, 1.234 fps etc. So far the theory. In practice, some software limitations exist in current VRR driver implementations, which can cause unreliable presentation timing in certain hard to define corner cases, and the stimulus presentation timing may also be a bit noisy, i.e., requested times are not hit exactly, but with a bit of jitter around the requested target times, typically with a standard deviation significantly below 0.5 msecs. I do have a working prototype that implements a more advanced scheduling algorithm inside the Linux amdgpu kernel driver, and this method has shown excellent results with very high reliability, stability and precision, especially on modern AMD gpu's with the DCN display engine, e.g., Ryzen processor integrated graphics, or AMD Navi/RDNA graphics cards, possibly also AMD Vega graphics cards.
Lack of financial funding for Psychtoolbox forced us to pause further work on turning this very promising prototype into something suitable for production use, though. Pending sufficient financial funding in the future, we may resume work on this project for a really good solution. The fine-grained timing is demonstrated, and can be tested, with the script VRRTest.m, which can run the display through various advanced visual stimulation timing schedules and compare requested stimulus onset times against actually achieved onset times. It also supports various hardware measurement methods to provide ground truth. -
For AMD FreeSync graphics cards on Linux only, there is a second mode available since Psychtoolbox 3.0.22, so-called "fast, seamless refresh rate switching": This mode takes advantage of a special AMD specific FreeSync mechanism that allows defining certain special video modes, which differ only in their fixed video refresh rate (FRR) by playing tricks with the vblank duration, and switching between those special modes very quickly, with minimal visual disruption. Psychtoolbox can now define such special video modes on the fly and trigger a switch between them. For this there is the new subfunction actualHz = Screen('ConfigureDisplay', 'FineGrainedSwitchRefreshRate', screenNumber, outputId, reqHz);, which allows requesting a fast switch to a new video refresh rate reqHz on a given video output outputId on a specific X-Screen screenNumber. The function will trigger an instant switch and then report the actual new refresh rate as actualHz. Switching the refresh rate of all video outputs of an X-Screen can also be achieved via actualHz = Screen('FrameRate', window, 2, reqHz);. The display will switch to the new reqHz refresh rate, or a rate actualHz very close to it, within one video refresh cycle duration. During this switch, the display may blank or flicker for a short time, typically less than 16 milliseconds. On other graphics cards, such a switch would take over a second, possibly multiple seconds, during which displays go dark. This functionality is useful if you want to play back dynamic stimulus animations or movies at a constant framerate during a trial, but with different framerates across different trials. You can switch rapidly between trials, or by taking a ~20 msecs break if you want to switch within a trial or an animation. Refresh rates can be selected at milliHz granularity or better, e.g., 57.3 Hz for a 57.3 fps framerate, 57.302 Hz etc. This can be useful for controlled movie playback or for steady state visually evoked potential (SSVEP) paradigms and similar use cases. This functionality is demonstrated and tested by the script VRRFixedRateSwitchingTest.m
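Both VRR modes can be sketched in a few lines of script code, assuming a suitably configured Linux + AMD FreeSync setup as described above:

```matlab
% Sketch (a): fine-grained stimulus onset timing via VRR.
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'UseFineGrainedTiming');
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
vbl = Screen('Flip', win);
% Present the next stimulus 11.5 msecs after the previous one - a
% spacing not quantized to the 8.333 msecs cycle of a 120 Hz display:
Screen('FillRect', win, 1);
vbl = Screen('Flip', win, vbl + 0.0115);
sca;

% Sketch (b): fast, seamless switch to a new fixed refresh rate,
% e.g., 57.3 Hz on video output 0 of X-Screen 0:
actualHz = Screen('ConfigureDisplay', 'FineGrainedSwitchRefreshRate', 0, 0, 57.3);
```

VRRTest.m and VRRFixedRateSwitchingTest.m exercise these mechanisms more thoroughly, including verification against hardware measurements.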
This wiki is complementary to the main website at http://psychtoolbox.org
Please feel encouraged to edit these pages and add helpful content. Take care.