2019-11-18

AMD's "recommended" GPU chart

This article is somewhat of a sequel to "Intel's Retarded Chart".

It always challenges my inner low-spec (nV GF2 MX 200 & Intel GMA 3150) childhood gamer and demoscener when someone says I need their exact expensive thing to do something, or to do it well. My brain refuses to believe such claims; after all, I've heard some of the best songs composed on the 4 channels of the Amiga's Paula, and seen the IBM PC 5150's previously undiscovered capabilities in 8088 MPH.


So what do we have here? Apparently we need at least an RX 5500 to do some "good" 4K or ultrawide gaming. And what counts as "good 4K gaming"? I myself consider "good" to be 30+ FPS at a very wild mix of settings, because I hate anti-aliasing, motion blur and chromatic aberration, so I put the post-processing settings on medium at most.

And what if I told you I play PUBG Lite on mostly Ultra on my aging nV GTX 660 on triple 1200p monitors (6144x1200) in Surround, and aside from the typical lags it gives over 60 FPS? Recently my confused driver made it possible to do 12Kx2400, albeit at like 2 FPS and with a borderless-mode bypass, because the game arbitrarily decided to support only 720p to 4K. Not even 800x600 or 960x540, even though it's supposedly made for low-end devices. And the FOV is restricted to 103°, or 0,57222*pi radians, while I've set the monitors up for 135°, or 3*pi/4 radians.

Minecraft isn't looking too shabby either, mainly because I can devote up to 24 GB of quad-channel RAM to the JVM. Back when I got this GPU, I was amazed that it could run modded 1.4.7 at a single 1920x1200 at over 80 FPS, which indicates how crap the previous NVS 290 and ION really were. They could play 1.2.5 at over 30 FPS, but after the singleplayer removal the framerate tanked into single digits and never recovered as later versions piled on bloat. And then there was the GMA 3150 in my Atom netbook. I remember modded 1.2.5 at lowest settings being somewhat acceptable, but lately on Windows anything newer than 1.7.3 crashes because of bad OpenGL support (1.4), and on Linux with the Mesa driver it works but looks like a slideshow. And yet Minecraft is commonly believed to be a CPU-intensive game, because of Java. Why do they keep adding new features and changing existing ones that nobody asked for (still no cave update) instead of optimizing the game so it runs as well as 1.2.5 did?

Another eternal classic is Quake 3 and its FLOSS clone OpenArena. If you find a well-designed map, it looks almost as good as PUBG Lite and yet runs well even on the GMA, at least the original Q3 does. I don't know what they did to ioquake3 over those 20 years that some OA maps are somewhat laggy. For those on the more indie side, Cube and its successor Sauerbraten have a great low-profile engine. No one really needs UE4 or Unity when you have that, plus ZDoom and Build.

According to the very inaccurate SEO-spam site UserBenchmark, the GTX 660 has about the same effective speed as the RX 460: only about 6x slower than the RTX 2080 Ti and 4x slower than the RX 5700 XT, while being 30x faster than the ION and 38x faster than the NVS 295, the graphics chips I upgraded from. Yet it's only 37% faster than Vega 11, so no groundbreaking progress has happened in dGPUs since 2013 and nV just keeps charging obscene amounts of money. I'm still waiting for a card about 30x effectively faster at a similar price point of about $300. That means where I now get 20 FPS, I'd get 600 FPS, assuming no CPU or bus bottlenecks, so I could put it in the next PC where I upgrade everything else. And in no way will I buy a new GPU every other year; that's just bullshit, I don't print or steal money. For futureproofing only the Radeon VII makes sense, thanks to its huge and fast VRAM for a giant framebuffer, while not being obscenely expensive like the Quadros or Titans.

Recently I have started to hate AAA games. They are so large that all my HDDs are full. How can anyone just release an apparently uncompressed ~100 GB thing and expect people to download it when there are still people with data caps? Where are the 4K Blu-rays? And then it needs to install, which takes another 100 GB. And then come the updates, bloating the size even more. Thank the warez guys for repacks.

The chart says that with roughly a 550-to-560-class card (where the 460 sits), I can't do good Full HD AAA gaming. Wrong: Kingdom Come runs at 30 FPS at medium-ish settings at 1920x1200. But I've become too spoiled by 60 FPS, so I played it at 1152x864 or something like that.

To be fair, this chart isn't as inaccurate as Intel's for their CPUs, where you can OC an old Core 2 Quad and it works just fine for the most part. The biggest issue in this AMD one is the "ultrawide gaming" tier. I can add some 720p or 480p resolutions to my Surround setup, and hell does it go fast. I can't test VR. 4K is like an unoptimized 5760x1200, which I've already sorted out. What counts as "smooth" gaming is subjective and depends on the particular game: swift shooters need 120+ FPS (and no smeary LCDs), while turn-based games don't need even 20 FPS, and even then it's about how responsive the UI feels. I don't care about game streaming; the input lag will just be crap, how can anyone be so hyped about this? There's SSH already.


"Ultimate [resolution] Gaming" - 2020 update

At CES 2020, AMD presented the RX 5500 as a card for 1080p gaming, the 5600 for "Ultimate" 1080p gaming, and the 5700 for 1440p gaming. Where's my triple 1920x1200 Surround gaming on an almost decade-old GPU from the (only) competition? In another article I analyzed how big a framebuffer is needed for resolutions that were ridiculous at the time. Redoing that math: with 8 GB you can have triple-buffered 32K, that's 30720x17280, at 2,12 GB per buffer (6,36 GB altogether), assuming 4 bytes per pixel (RGB10A2 or RGBA8), with an extra 1,5 GB left over for textures and whatnot. My 5760x1200 setup would consume only about 83 MB even when triple buffered, which barely makes a dent in my 2 GB of VRAM.
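As a sanity check on that arithmetic, here's a small sketch (the function is my own, assuming 4 bytes per pixel and no framebuffer compression):

```python
# Raw framebuffer footprint for a video mode, assuming 32-bit colour
# (RGBA8 or RGB10A2, i.e. 4 bytes per pixel) and no compression.
def framebuffer_bytes(width, height, buffers=3, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * buffers

print(framebuffer_bytes(30720, 17280) / 1e9)  # triple-buffered 32K: ~6.37 GB
print(framebuffer_bytes(5760, 1200) / 1e6)    # triple 1920x1200 Surround: ~82.9 MB
```

A single 32K buffer is about 2,12 GB, so three of them still fit in 8 GB with room to spare.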

Moreover, the resolution class of a GPU decreases over time. The "1440p" RX 5700 may become a "720p" card by 2025. Intel "HD" Graphics was good for 720p30 maybe in the era of PS3 and XB360 games, but today it's more like Intel 360p Graphics, and putting a U before the HD doesn't increase performance. The same goes for my GTX 660: in the summer of 2014 I could play at 1920x1200 at all maximum settings save for some blurring/smoothing crap, and in the spring of 2020 I occasionally have to lower the resolution to some tweak-mode of my own devising, like 960x600 or 1152x720. It doesn't even bother me, as I'm 2 dioptres short and the AAA games requiring such compromises don't fit on my HDDs anyway.

2019-11-02

Goniometric Synthesizer

Instead of working with knobs or searching for a sample, you could just enter a formula, offset, and speed, and make music in a more mathematical way. It's mathematically proven that any periodic sound can be built with a Fourier series, which is essentially an infinite stacking of sines and cosines. Both our ears and our equipment are finite, so we don't even need to stack infinitely. On my way to coding a natively microtonal DAW, I ought to start with smaller bits and pieces; coding it all at once is impossible for me.
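To show that finite stacking really suffices, here's a minimal toy example of my own (not the Gsys code): a square wave approximated by a truncated Fourier series of sines.

```python
import math

def square_approx(x, harmonics=25):
    # Truncated Fourier series of a square wave:
    # (4/pi) * sum over odd n of sin(n*x)/n
    return 4 / math.pi * sum(
        math.sin(n * x) / n for n in range(1, 2 * harmonics, 2))

# A couple dozen harmonics already get close to the ideal +1/-1 plateaus:
print(square_approx(math.pi / 2), square_approx(-math.pi / 2))
```

With 25 odd harmonics the plateau values land within a couple percent of ±1, which is far below what a speaker or an ear would notice.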

The following QBASIC program was used to generate the initial waveform for the SuperTan sample pack. The pack mostly contains this waveform, variously layered and detuned, exported using OpenMPT.

---------------------------------------------------------------------------------

PRINT "Gsys Goniometric Sampler v0.2 - May 2015"
INPUT "XScale:"; c
INPUT "Sampledepth:"; d
INPUT "Color:"; e
INPUT "InvYScale:"; f
SCREEN 12                               ' 640x480, 16 colours
OPEN "gosawave" FOR OUTPUT AS #1 LEN = 640

FOR b = 0 TO 639
    a = TAN(b / d)                      ' tangent waveform; d stretches the period
    LINE (b * c, a * (-240 / f) + 240)-((b + 1) * c, TAN((b + 1) / d) * (-240 / f) + 240), e
    g = INT(a * 256 / f + 128)          ' rescale to unsigned 8-bit, centred on 128

    IF g > 255 THEN                     ' clamp tangent's spikes into 0-255
        g = 255
    ELSEIF g < 0 THEN
        g = 0
    END IF

    PRINT #1, CHR$(g);                  ' append the raw sample byte to the file
    OUT 544, g                          ' and poke it to I/O port &H220
NEXT b
SLEEP

---------------------------------------------------------------------------------

Another program was written 4 years later, when I heard how promising R is. I knew it resembled Python and C enough that, for starters, I could resurrect my knowledge from the freshman courses 2 years before, and then explore optimization and the more R-like ways. Since then, though, Python's popularity has gone through the roof at the expense of R. Normies learn and forget quickly; we Aspies don't do that. This gives me confidence that some 40 years later I'll still be doing what I now revere as magic, and probably getting paid in bitcoins (fiat currencies suck in the long term). The program outputs the waveform as a series of double-precision little-endian numbers without any header. You then import it in Audacity, cut as needed, and export to a format your DAW understands. No sample pack from this yet, but I have a name already: Fat Authentic Goniometric Synths.

---------------------------------------------------------------------------------

library(parallel)

cl = makeCluster(detectCores())

fourier = function(x, N = 100000, P = 2*pi, a0 = 0){
  # an, bn - cosine and sine coefficients, set inside the loop, dependent on the iteration variable
  ret = a0/2
  n = 1 # iteration start
  while(n <= N){
    if(n %% 2 == 0) {
      n = n+1
      next
    }
    an = 1/(pi*n)
    bn = 4/(pi*n)
    ret = ret + (an*cos(2*pi*n*x/P) + bn*sin(2*pi*n*x/P))
    n = n + 1
  }

  # somehow actually slower
  #ret = sum(a0/2, unlist(lapply(1:N, function(n){
  #  an = (-1)^(n)/(pi*n)
  #  bn = 2*n*(-1)^(n+0)/(pi*n)
  #  ret = (an*cos(2*pi*n*x/P) + bn*sin(2*pi*n*x/P))
  #  return(ret)
  #})))

  # ret can only be NA here if x itself was NA, hence the guard
  if(!is.na(ret) && (ret > 1000 || ret < -1000)) {
    return (0)
  }
  #print(ret)
  return(ret)

}

x = seq(0, 4*pi, by = pi/96)
#x = NULL
#i = 0
#while(i < 4*pi){
#  x = c(x, i)
#  i = i + pi/24
#}

# clamp x into [bottom, top]; NAs pass through unchanged
clamp = function(x, bottom, top){
  pmin(pmax(x, bottom), top)
}

beginning = Sys.time()
output = parallel::parLapply(cl, x, fourier)
theend = Sys.time()
stopCluster(cl)
# plot() used to die trying to set an infinite y range; flatten the list
# and keep only finite values before plotting
y = unlist(output)
ok = is.finite(y)
plot(x[ok], y[ok], type = "l", main = "Function", sub = "a wonderful one", xlab = "argument", ylab = "value", asp = 0.5, xlim = c(0, 13))
print(output)
ofile = file("fourier.raw", "wb")
# write every double at once instead of looping element by element
writeBin(unlist(output), ofile)
close(ofile)
print(theend - beginning)


#plot(0,0)
#i = 1
#while(i <= length(x)){
#  points(x[i]/10, fourier(x[i])/10, col = "black") # doesn't appear to do anything
#  i = i + 1
#}

# determine where to put a dot for each pair of argument and output
#scalex = 1/pi
#scaley = 1/10
#centerx = cols%/%2
#centery = rows%/%2
#i = 1
#results = NULL
#while(i < length(x)){
#  results = c(results, fourier(x))
#  idx1 = centerx + scalex*i
#  idx2 = centery + scaley*results[i]
#  if(!is.na(idx1) && !(is.na(idx2)) && idx1 > 0 && idx1 <= rows && idx2 > 0 && idx2 <= cols){
#    fb[idx1, idx2] = "@"
#  }
#}

---------------------------------------------------------------------------------
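Since the Audacity import is a manual step, the headerless float64 file can also be converted to WAV directly. A sketch using only the Python standard library; the filenames, peak normalization, and the 44,1 kHz rate are my assumptions:

```python
import struct
import wave

def raw_doubles_to_wav(in_path, out_path, rate=44100):
    # Read a headerless stream of little-endian float64 samples,
    # as written by writeBin() in the R script above.
    with open(in_path, "rb") as f:
        raw = f.read()
    samples = struct.unpack("<%dd" % (len(raw) // 8), raw)
    # Normalise to the 16-bit range and write a mono WAV.
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(32767 * s / peak)) for s in samples))
```

Usage would be `raw_doubles_to_wav("fourier.raw", "fourier.wav")`; the resulting WAV loads in any DAW without guessing the sample format.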

Now I should find and learn a good audio library so I have some way to output the sounds I'm generating. The old UNIX-style piping to /dev/dsp doesn't work on all systems. It would be a good idea to support multiple sound APIs, because one does not simply know which dependency will be available.