Grafx Bot Firefox bug


So the title of this OSNews story is inaccurate. The FGLRX driver is crashier; it's blacklisted at the moment, but this could change, and hopefully will change. Just launch Firefox with this command (you can use it in the properties of your desktop icon, too). This was the top reason for crashiness on Linux.
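The command itself is elided in the quote above. As a hedged sketch only: in the Firefox 4 era the switch being discussed was an environment variable prepended to the launcher line. The variable name MOZ_GLX_IGNORE_BLACKLIST below is an assumption, not confirmed by this text.

```shell
# Hedged sketch: the launcher line would look something like
#   MOZ_GLX_IGNORE_BLACKLIST=1 firefox
# (variable name is an assumption; firefox itself is not started here).
export MOZ_GLX_IGNORE_BLACKLIST=1
echo "MOZ_GLX_IGNORE_BLACKLIST=$MOZ_GLX_IGNORE_BLACKLIST"
```

The same prefixed command can be pasted into a desktop icon's command property, as the comment suggests.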

We are looking forward to un-blacklisting drivers as soon as they get good enough; see the discussion in this bug (scroll down past the first comments, sent by an angry user). If the manufacturers released proper specs for their cards, maybe we could have first-class drivers for Xorg. Geez, so many fickle users out there. Most don't appreciate even a little that it was Firefox that stirred up the browser wars, back when the alternatives were a sluggish Netscape and an anti-standards IE.

So your Firefox is 'sluggish'? Sounds like you have other issues on your system too. Also, don't forget Google only offered Chrome to Windows users for quite a while, leaving Linux users with a somewhat-supported 'build your own' option in Chromium.

Their excuse was a public statement about how it was too difficult and problematic to offer Linux or OS X versions. Yet Firefox and Opera have been shipping concurrent versions for multiple platforms for years. OK, Opera has only recently been concurrent version-wise, but their developers are too busy innovating unique ideas that other browsers pick up on. Also, for WebGL (which is enabled on Linux if your driver is whitelisted), the best way to test is to run the official WebGL conformance test suite: click 'run tests', copy your results into a text file and attach it to this bug. If a driver can pass almost all these tests and doesn't crash running them, that's a good sign. Looking forward to enabling the whitelist once we get more data.

I hope to convince developers of GL drivers to use it to test their drivers against. Another way the title of this story is inaccurate: we do have hardware acceleration on Linux, thanks to XRender, and we have had it for years.

So if your drivers have a good XRender implementation, then your Firefox can blow the competition into orbit in 2D graphics benchmarks such as these. What's blacklisted on buggy X drivers is OpenGL. It is used for WebGL, and for accelerated compositing of the graphics layers in web pages. However, for the latter (compositing), we are still working on resolving performance issues in the interaction with XRender, and that won't make it into Firefox 4, so we don't enable accelerated compositing by default regardless of the driver blacklist. If you want accelerated compositing, at the risk of losing the benefit of XRender, you have to go to about:config. I'm happily using it here, and it can double the performance in fullscreen WebGL demos.
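The about:config preference being referred to is not named in the text. As an assumption, the Firefox 4-era pref for forcing accelerated layers was layers.acceleration.force-enabled. A hedged sketch of setting it outside the UI via a user.js file follows (the profile path is a stand-in, not a real profile):

```shell
# Hedged sketch: flip the (assumed) compositing pref via user.js.
# A temp dir stands in for ~/.mozilla/firefox/<profile>/ here.
PROFILE="$(mktemp -d)"
echo 'user_pref("layers.acceleration.force-enabled", true);' >> "$PROFILE/user.js"
grep -c 'layers.acceleration.force-enabled' "$PROFILE/user.js"
```

Firefox reads user.js at startup and copies its prefs into the profile, which has the same effect as toggling the value in about:config.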

The blame probably could be directed toward the graphics hardware manufacturers with far more accuracy and truth than toward X. So you get WebGL right away. If you want accelerated compositing too, at the risk of losing the benefit of XRender, go to about:config. There are specs for Intel chips, and drivers from Intel.

Is this of any help? So even "innocuous" graphics driver bugs can suddenly become major security issues. Even a plain crash is considered a DoS vulnerability when scripts can trigger it at will. So yes, WebGL does put much stricter requirements on drivers than, say, video games or Compiz.

Or you could put the blame where it really lies: with Xorg. Nvidia's drivers work better because they basically replace half the Xorg stack. This state of affairs is absurd. If, to get good performance and stability, you have to replace half the underlying graphics stack, then the graphics stack must be the grand master of suck.

You would have better GPU drivers if Linus provided a stable ABI. But working with third parties has never been a goal of Linus's, any more than creating a desktop OS. He also doesn't seem to care about creating a server OS that meets the needs of the market, given how often the unstable ABI has broken VMware. Because having a stable ABI in a *nix is just unthinkable? Where are all the benefits of the unstable ABI? How has Linux leaped past other *nix systems?

No, sorry, but you can't blame Xorg. It is an open source project and anyone can contribute. The question is: why does Nvidia not contribute to Xorg instead of reimplementing its stack inside a proprietary driver?

Xorg is an extremely complex piece of software, but it is also extremely capable. MS Office has more bugs than Notepad. Xorg just needs more developers and cooperation from hardware manufacturers. What if you manufacture a good card but the driver for it sucks? Your product sucks overall. Manufacturers need to put more effort into the software part on Linux.

They will lose customers in the long run if they don't. It's overly complex, and has too many features that have no real place in today's computing environment. I use Linux every day, and Xorg is the weak spot in the whole OS: it's slow, and it crashes (not often, but Windows 7 has never crashed on the same computer, nor did Vista). There is a reason that Red Hat and Ubuntu are looking at Wayland, and that is simplicity, reliability and speed.

An environment variable? It's odd to require that, especially since I read in one of your other comments that another related feature is switchable through about:config. I think you hit the nail on the head. X is extremely complex and extremely capable, so it can take an extreme amount of effort and time to get stable drivers. IMHO we should think about making X and/or Wayland as simple and efficient as possible while still keeping the relevant features.

Just to clarify, I am not blaming this all on X, but complexity does not help. Really, it's just because we're in a rush now, and an environment variable switch can be implemented in one line of code, while an about:config switch was harder to implement due to where the GLX blacklisting is implemented. Eventually, yes, it'll be in about:config. The open drivers for both of those offer 3D as standard. The open drivers for NVidia only offer 'experimental' 3D, after much blood, sweat and tears of reverse engineering.

Phoronix is quite a good place to keep up. There are some things that are not easy to talk about. I'll try to summarize the results of past conversations. A binary-only driver is very bad news, and should be shunned. Proprietary software doesn't respect users' freedom: users are not free to run the program as they wish, to study the source code and change it so that the program does what they wish, or to redistribute copies with or without changes.

Without these freedoms, users cannot control the software or their computing. Also, as Rick Moen said: In the article at http: Linux does not have a binary kernel interface, nor does it have a fixed kernel interface. Please realize that the in-kernel interfaces are not the kernel-to-userspace interfaces. The kernel-to-userspace interface is the one that application programs use: the syscall interface.

The author of the article says that he has old programs that were built on a pre-0.x kernel and still run; this interface is the one that users and application programmers can count on being stable. That article reflects the view of a large portion of Linux kernel developers: without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMware's to work reliably on multiple kernels.

As an example, if some structures change in a new kernel release (for better performance, more features, or whatever other reason), a binary VMware module using the old structure layout may cause catastrophic damage.

If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work. The module will have to be adapted to the new kernel. That is considered acceptable, since everybody should have the source and can find somebody able to modify it to fit.

On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything.

On the downside, this forces Microsoft to maintain backwards compatibility, which is at best time-consuming for Microsoft's developers and at worst inefficient, causes bugs, and prevents forward progress. ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel, with the long-term problems of proprietary software already described.

On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers question whether binary modules are even allowed, the ability to distribute binary modules isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code. But is it the job of Firefox to shield users from blatant security bugs in the underlying OpenGL implementation, neglecting the bug-free implementations in the process?

Rather, more use and exposure would motivate the driver developers to fix their buggy drivers. Perhaps a blacklist could be implemented that notifies users that their driver is buggy and that Firefox will run unaccelerated?

This would raise awareness without negatively affecting the "good systems". I see you have already implemented a blacklist. You are too quick to dismiss Xorg features as irrelevant.

They are relevant to many people. Wayland may be a nice alternative for you but it is still far from being as stable as Xorg. Xorg has problems but it has many strengths.

You would not be using it if it had more problems than useful features. For me there is no alternative to Xorg, because I need network transparency. Yes, network transparency is relevant, today. When you are used to Xorg and NX, this is a huge step back. Gallium helps X too.
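For readers who haven't used it, X network transparency in practice is a one-liner over SSH. The host name below is hypothetical, so the ssh line is shown as a comment; only the DISPLAY-syntax demonstration actually runs.

```shell
# X network transparency: an X client draws on whatever DISPLAY names.
# Typical use (hypothetical host, not executed here):
#   ssh -X user@appserver xterm   # the remote xterm appears on your desktop
# SSH exports a forwarded DISPLAY on the remote end, e.g. localhost:10.0.
# DISPLAY syntax is host:display[.screen]:
EXAMPLE_DISPLAY="localhost:10.0"
xhost_part="${EXAMPLE_DISPLAY%%:*}"   # host part of the display name
xdisp_part="${EXAMPLE_DISPLAY#*:}"    # display(.screen) part
echo "$xhost_part $xdisp_part"
```

This is what the poster means by integration: the remote window is an ordinary window on the local desktop, not a second desktop inside a viewer.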

Removing drivers from X will greatly help X, as it will mean much less code and make changing things much easier. It doesn't just make X alternatives possible.

Everyone is a winner. It's open source; someone else can do it if they can't. There is a graphics drivers problem, but it's getting much better and the future is bright (Gallium3D and friends). Even with what we have now, many, many applications manage to do OpenGL just fine on X, even with the crappy closed NVidia drivers I must run, which crash X about once a month.

My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If this does happen, it will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath. FreeNX proves that Linux can provide a proper, usable remote GUI environment, but holding on to this broken functionality is part of the problem with Xorg.

Xorg drivers are buggy? Buggy is the crap called GFX hardware. After all the technological advancements, graphics cards still could not export a common hardware access API. It makes me wonder about the author's unfair and inaccurate description of the GFX situation in general. Why not have HW-accelerated browsers in Haiku or Syllable? Because people would be involved in an eternal hunt for documentation.

The only answer is standards. There are enough FPGAs out there to burn a standard driver into them. If you want my 2 cents: I buy, for example, a cheap standard 2D gfx card and a standardized accelerator board (cheaper because it is more oriented toward GP computing and weaker in 3D; I want to solve differential equations with Octave on FreeBSD, for example). If you want to be cheaper, buy only the first and let your 8-core CPU do the rest. What do we have now? Everything combined in a proprietary, non-standards-compliant, uncompetitive manner, and the older vendors killed off.

OSS is part of the global market, and making drivers for special OSes is uncompetitive. Even the mighty Windows needs a vendor driver. There is always the cheapness factor. But would you sacrifice freedom and standards compliance for a lower price? If yes, then, in my opinion, computing is not for you. I could not disagree more. Only Citrix does this, and poorly. With Xorg you can have an application server and administer your applications in a single place.

Just let your users connect and use their applications as if they were local. They can resize windows, put them next to their local windows, cut and paste: everything. It is integrated into their desktop. They don't need another desktop with poor-quality graphics and scrollbars. FreeNX is nice, but it does not replace Xorg either: it depends on it, as a layer on top of Xorg. Think about it for a while. We develop so-called RIAs in PHP, JavaScript, jQuery, Java or the like.

RIAs suck when compared to what they should be. The web was not designed for that. We should use X for that. "People don't need network transparency, people need network access": I assume that you have actual numbers to back this claim, as opposed to simply making things up, right? Sockets, files, module management, etc. - Gilboa (Edited) If some simple tests supplied by WebGL's vendor can already lead to this result, I agree that WebGL should not be enabled by default for this chipset.

As jacquouille said, it's too much of a security risk. It is good to see that another big free software application provider (in addition to KDE) is now running into driver bugs holding back implementations of current state-of-the-art interfaces. They've been in this very situation for a couple of months now and might have data which could be useful to you as well.

I just tried beta 9, and well, either I get a completely black window or it flickers like Speedy Gonzales having an epileptic seizure. Not really what I would call usable. They call it lobbying. The more media presence the problem gets, the better the chances of a speedier solution.

The squeakiest wheel gets the grease. I don't doubt there is a problem with it; what I'm saying is that others manage. Worst case, show some message blaming the graphics card drivers. Others don't make such a fuss, and manage.

My old OpenGL stuff just works. First of all, if an implementation is shown to be 'bug-free' then we'll gladly whitelist it in the next minor update. And yes, it is our job to shield the user from buggy drivers, buggy system libraries, whatever.

You don't want to have to wait for your OpenGL driver to be fixed to be able to use Firefox 4 without random crashes. That would be nice, but we also need to be able to ship Firefox 4 ASAP without lowering our quality standards. This is information of a very technical nature that most users won't know how to act upon. Actually, I've gotten in touch with them already, asking for that: http: Actually, Xorg developers have spontaneously contacted us and are looking into the driver issues we're having, which they could reproduce.

Looking forward to un-blacklisting stuff in the future. Or did they contact you because of something else? I can't really see OSNews having this kind of influence. In any case, that sounds like good news! Network transparency was useful when the machines you used every day were not as powerful as they are today. When I first used an X terminal, back in the late 80s, X's network transparency was useful because it allowed both the local machine and the remote server to share the burden of displaying graphics; this mattered on resource-limited systems.

Today, it's just added complexity, because in most cases neither machine is resource-limited, and therefore you don't need the overhead of a remote X client compared to a lighter access method such as ssh, or even RDP and VNC.

I don't know if it's because of this article, or because of the article that Phoronix is currently running on the same topic, or because of the various blog posts flying around the interwebs. The interesting part is that KWin did not run into driver bugs; the driver bugs ran into KWin. To understand that, one has to look at KDE 4.x. Yet in KDE 4.x the problems appeared anyway. No, not because of KWin; the code in those areas was mostly untouched from 4.x.

Instead, drivers suddenly claimed to support features when this was not the case. And the only way around this for KWin is blacklisting, which is a lot of work. So, on a topic about a mature multi-platform browser that is having a big problem with Linux: developing multi-platform software is very difficult, especially if you cannot rely on the underlying platform, or have cutting-edge technology requirements or other platform-dependent bits.

But that's mostly a matter of performance issues. I mean, if the outcome is the same, then why is it a 'sad situation'? You'd think it would help, especially with Intel, seeing as the X developers work for them. That makes absolutely no sense whatsoever: the issue is layering WebGL on top of Direct3D, and the programmer who is writing for WebGL doesn't care what happens under the hood and behind the scenes, because all he is concerned about is the fact that WebGL is provided.

I think the whole sadness has to do with the fact that they have to maintain two separate back ends instead of a single one. Sorry to sound pathetic, but boo-f--king-whoo. It's time that the Firefox developers stop writing their code for the lowest common denominator and start taking advantage of the features which operating systems expose to developers.

It really doesn't add that much complexity. Most of the issues that X is having these days have nothing to do with things like network transparency and everything to do with incomplete and buggy driver implementations, or bad GL frameworks Gallium helps, supposedly, but much work is left to be done. This sounds like a recipe to make everything slow and lowest common denominator.

I disagree. I think it's 20 years of legacy features and a broken design that is the problem. Honestly, you can blame it on the drivers, but it can't only be the drivers, because the proprietary drivers are much more stable on other OSes, which makes me think that there might be other factors. I know it doesn't replace Xorg, but with such better replacements for that one complicated part of Xorg's functionality, it makes more sense to remove it and use a better alternative.

I forgot WebGL is involved. Sure, they exist, those other factors, but their importance is limited. The biggest limiting factors have been solved already, with new acceleration architectures, for example. If you can name some limiting factors, that would be great. For now, you are just assuming that they exist because the Linux proprietary drivers are worse than the Windows ones. Well, the Linux drivers are not as well supported as the Windows ones, given the difference in market share, so there is definitely a reason there.

Also, the proprietary drivers have chosen not to move to the newer X architectures, so they do not get the benefit of those. Bad news for whom? Users just want something that works, and the current system already provides plenty of bad news. "That proprietary software doesn't respect users' freedom": freedom as defined by Stallman's newspeak, which only exists to push his agenda. "On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility": why does Microsoft have to be pulled into this?

Why not limit the discussion to Unix systems that have a stable ABI? Tell me where Linux would have been held back if they had kept a stable ABI on a three-year cycle. FreeBSD keeps a stable ABI across minor releases, so be specific and show, in comparison, how Linux has had an advantage. You can try answering the same question: how would Linux have been held back if they had kept the ABI on a three-year cycle? X is no longer as network-transparent as it used to be, unfortunately. That was perhaps the case 20 years ago, when most of the computer graphics, font rendering, etc. happened on the X server.

Now we have XShm and XRender, which enable reasonably fast client-side rendering on local machines but no longer work across the network (at least not if you care about the user experience). The network itself has changed too. Over the years bandwidth has increased dramatically, but latency hasn't changed that much. Hard-wired networks are now often replaced with wifi connections, VPNs and other ad-hoc networks.

But in most other applications, even if the program manages to start up properly, you still have to be very aware of the fact that it is not running locally, if only for performance and reliability reasons. Rdesktop and VNC chose a different way. Thus, having a remote session in a separate desktop is GOOD: it makes it easy to find out which application is running where. Having the possibility to disconnect from and reconnect to a remote session, and thus move your existing session between computers, is GOOD.

Using protocols that benefit from increased bandwidth and don't stress network latency (asynchronous transfer of bitmaps, video) is GOOD. Having additional features (audio redirection, file transfer) is GOOD.

After all, with a network you can do much more than just open a window from machine A on machine B. From my -own- experience, I can't say that maintaining Windows kernel code, with its semi-stable ABI, is any easier compared to Linux. Actually, the availability of the complete kernel source makes Linux far easier, at least in my view. Getting back to the subject: you claimed that the lack of a stable ABI is the main problem with writing good drivers; I claimed, from my own personal experience (which may or may not be relevant in the case of graphics driver writers), that this is -not- the case.

Graphics is a whole 'nother kettle of fish. As far as I know, writing a graphics driver involves writing multiple high-quality JIT compilers, a memory management layer, and a bunch of difficult-to-debug libraries. The statistic I heard, and believe, is that the NVidia driver on Windows contains more code than all the other drivers on a typical system combined.

So they would have to put workarounds for every bug they find in a platform-specific graphics driver right into a multi-platform web browser? Sounds out of place to me. Well, sure, it is doable, but I understand their decision not to do it. That's why they don't want a stable kernel API: "You think you want a stable kernel interface, but you really do not, and you don't even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree."

Everyone in this thread agrees that the proprietary Nvidia drivers are the best. They are also the only driver on my system that crashes, regularly.

They don't keep up with X developments, so you are left behind. I can't wait to not have to use them. Which Unix system supports the most devices and architectures? In fact, more than any OS, ever. Dude, really, read the doc; it covers all that you are bringing up. When you say "better alternative", are you talking about Citrix, or are you still saying VNC is an alternative to X? If you are saying that VNC is an alternative to X, let me say it is not.

A lot of people use Citrix because VNC does not work for them. Obviously that is not your case, because VNC is enough for you, but I think your attitude is wrong. I use MS Office to write code, and it is overbloated. I don't need all the formatting crap, and the page layout is useless for me. Does this mean MS Office sucks? It just means it is not for me. OK, that was hyperbole and maybe a very bad analogy, but there is some kind of point in there. Why aren't you using the framebuffer?

The framebuffer sucks, I know. GTK used to work on it, but not that well. It has a lot of problems, yet you are complaining about Xorg.

Xorg has problems, and some bugs need fixing, but you will find that the alternatives have their own problems. Maybe the best use of resources would be to fix those problems instead of removing stuff from Xorg that many people rely upon, so as to make it like the framebuffer.

Anyway, your attitude of "VNC is enough for everybody" sounds very wrong to me. I use Xorg extensively, and my colleagues install Exceed or Xming on their Windows machines, even though there is a VNC server on the server, because VNC is not adapted to our usage pattern. No, do like they do with plugins: a separate process. With the open drivers, someone will try to fix it; with the closed ones, well, let's hope they care enough about last year's device. Or, you know, look at the source of things that manage just fine. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.

Those reasons are that Xorg is full of legacy crap that nobody uses anymore, which still needs to remain fully supported and no, I'm not talking about network transparency. This makes Xorg far more difficult to maintain and improve without breaking everything, and slows development down. Either because it's not had enough time spent on it, or because it interacts poorly with the legacy crap.

Not having a stable ABI doesn't hurt the open-source side of things. So, the only group it could possibly hurt are the closed-source guys. That'd be Nvidia and ATI, basically. Let's see what Nvidia have to say The Linux-specific kernel module is tiny.

So, Nvidia don't seem to think it's a problem. I think they'd know better than you do. As for other drivers I don't see the problem. Nearly everything in a modern PC will run just fine with no special drivers. On Windows, you use Microsoft's drivers, on Mac OS X you use Apple's drivers and they even work on general PC hardware with few problems , and on Linux you just use the standard kernel drivers.

The only exceptions are printers, video card drivers, and wireless network drivers. Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant. As for wireless network cards: the in-kernel drivers for wireless devices kick the ass of any vendor-supplied Linux driver, or of the Windows drivers running through NDISWrapper. One other point: remember the problems Microsoft had with third-party drivers on Windows?

How much trouble did lousy third-party drivers cause? To solve this problem, Microsoft had to develop a huge range of static test suites and a fairly comprehensive driver testing regime. They then had to force hardware manufacturers to use these tools and certify their drivers, by adding scary warnings about unsigned drivers. Later on, they even removed support for non-certified drivers entirely. The Linux community cannot do that, for a whole heap of licensing, technical, and logistical reasons.

Plus, we don't have the money, and we don't have the clout to force hardware manufacturers to follow the rules. So they won't - they just won't release Linux drivers at all.

Unless you advocate supporting only a subset of WebGL: the part which doesn't crash on the currently used drivers. Then we simply don't agree. We've had too much partial web standard support in the past, I think. Wine is one of the things that manage, and for OpenGL it will probably do very little beyond passing it on.

Crashes in Wine are normally because of the nature of it. That's what I think, anyway; I don't know of any real data on this. You just have to think long-term. Something that forces users to depend on a company whose goal is extracting the biggest amount of money from them... You know, Bill Gates got to be the richest man, and Microsoft got to be a convicted monopolist, at least three times.

Also, when Nvidia stops maintaining a driver on Windows, Linux, etc., we start seeing what happens; so we have to think long-term. If there are problems in this thread with elementary facts, imagine if we start speculating. Performance is an issue, too.

Sure, those programs don't know about it, but the call-translation overhead results in very poor performance in the end. And THAT they care about. Chrome is consistently more responsive than Firefox on any computer that I've used; I suspect that it is thanks to its multi-process design.

That's probably why the GP said that, and I agree with him. There's a major difference between the two. I just thought of KWin because their problems with driver status also made quite some waves, but indeed their needs don't compare much to yours.

How about projects which use OpenGL for more than compositing? As I already said, there's a difference between being unresponsive and being bloated. Not all lean software is responsive. A single-threaded design where UI rendering is on the same thread as the number-crunching algorithms (like Firefox's, though thankfully they're working on that) is all it takes to make software unresponsive, no matter how well the rest is coded. What are you talking about? I'm not talking about what people want; I'm talking about technology, and what is better.

It's not about numbers, it's about usability, and Xorg's network transparency is not usable compared to the alternatives; it's a very inefficient way to provide remote access, which is what people need. I don't think anybody really cares exactly how it works, as long as it works, and Xorg's network transparency is not usable as a reliable tool for network access.

But even if his word usage was wrong, it's hard to argue with "it feels like crap after a while", regardless of how well it runs all the crap which makes it run like a turd. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics; the nouveau driver is several times faster while using a fraction of the memory compared to Xorg with nvidia. They never invested the same effort as they did into DirectX drivers. Nvidia, on the other hand, produces decent OpenGL drivers across all platforms.

Still, this whole situation is a mess. This sounds like FUD. Bridling it with some committee-derived standard would be extremely hurtful to the companies involved and mostly unnecessary anyway. They already provide drivers for the platforms that matter and since they can control the card and the driver, they can develop at a much faster pace. By the way, there already is a standard interface and it's called OpenGL.

DirectX would count too. Adding yet another layer is just bloat and unnecessary. Of the bunch, for your usage pattern, Citrix is useless. FreeNX is the best performing one and that is because it makes use of X. You will find that people are paying good money for Citrix though. Maybe "the people" care about desktop integration, despite what you think.

You don't usually bother making a driver blacklist for a game. If a game crashes because of a driver, so be it. Of course, this is open source. "I can't say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux": write a binary driver for Windows and it will work for the life of the system; write one for Linux and it will likely be broken by the next kernel update. "Getting back to the subject, you claimed that the lack of stable ABI is the main problem with writing good drivers": no, I didn't claim that.

I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.

It's a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position. Modding down the parent comment: is that how reasoning is avoided? FreeBSD does not have even close to the same desktop market share or mindshare as Linux, and as such does not get the same amount of attention from hardware companies.

The point of bringing up FreeBSD is that it has had a stable ABI for minor releases, and yet no one has told me how Linux was able to leap ahead in terms of specific features that could not wait for a minor release cycle.

Your link doesn't work. So you cherry-picked a few positive quotes. Actions speak louder than words, and by their actions they clearly prefer to release binary drivers for stable interfaces. Users prefer binary drivers to having an update break the system. Users just want something that works. Wait, so you are saying everything else works fine in Linux? What about webcams, sound cards and Bluetooth?

No complaints about Audigy, then? The question is obviously related to video card drivers, and most of your long-winded post is irrelevant. I asked a simple question that you haven't been able to answer. Bullshit; I can list numerous network cards that have excellent customer ratings. Intel cards especially have been stellar for me. No, I don't recall that, actually.

Which test(s) render with a black rectangle, and which render with the black bar? Jonathan, any ideas on how we can tell which test failed, and how? So, to answer my question in Comment 9: this seems to be a bug in Grafx Bot -- it's running up-to-date code against stale reftests.