From what I've dug up so far, it looks like dual Arc A770 cards are supported by llama.cpp, and I've seen reports that llama.cpp on top of IPEX-LLM is the fastest way to run inference on Intel cards.
On the other hand, there is the more expensive 7900 XTX, for which AMD claims (Jan '25) that inference is faster than on a 4090.
So - what is the state of the art as of today, and how do the two compare (apples to apples)? What is the tokens/s difference?
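For an apples-to-apples comparison, I was thinking of running the same GGUF model at the same quantization on both setups with llama.cpp's own `llama-bench` tool. A sketch of what I have in mind (the model filename is just a placeholder):

```shell
# Same model file and quantization on both machines; -ngl 99 offloads all layers.
# -p 512 measures prompt processing, -n 128 measures token generation.

# Intel Arc A770(s): SYCL build of llama.cpp (or the IPEX-LLM packaging of it)
./llama-bench -m llama-3-8b.Q4_K_M.gguf -p 512 -n 128 -ngl 99

# AMD 7900 XTX: ROCm/HIP build of llama.cpp
./llama-bench -m llama-3-8b.Q4_K_M.gguf -p 512 -n 128 -ngl 99
```

`llama-bench` reports prompt-processing (pp) and token-generation (tg) throughput in t/s, which would be the numbers to compare directly. Does anyone have results from runs like this?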