# Count Safe Cells On Board With Hostile Bishops Using Knowledge About LLMs

Hello, checkiomates!

We are actively growing our presence on popular social networks and never tire of reminding you about, and inviting you to, our pages on Instagram and Twitter!

In this digest we will take a brief look at how large language models (LLMs) work, and try to avoid hostile bishops on a chessboard.

TIP

In your profile, after clicking on the big percentage number showing your progress, you can see the modules and methods you have already used in your shared solutions. If you want to discover all CheckiO features, visit our tutorial. It's a long read, but it's worth it!

MISSION

A generalized square chessboard has been taken over by an army of bishops, each bishop represented as a two-tuple (row, col) (0-based indexing) of the coordinates of the square that the bishop stands on. Given the board size n and the list of bishops on that board, count the number of safe squares, that is, squares not covered by any bishop.

```python
safe_squares(10, []) == 100
safe_squares(4, [(2, 3), (0, 1)]) == 11
safe_squares(8, [(1, 1), (3, 5), (7, 0), (7, 6)]) == 29
```
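The key observation is that every square on one of a bishop's diagonals shares either the sum `row + col` (anti-diagonal) or the difference `row - col` (main diagonal) with the bishop's own square. One possible solution sketch based on that idea (an illustration, not the official answer):

```python
def safe_squares(n, bishops):
    # A bishop at (r, c) covers every square whose diagonal sum r + c
    # or diagonal difference r - c matches its own. Collect both sets.
    sums = {r + c for r, c in bishops}
    diffs = {r - c for r, c in bishops}
    # A square is safe when it lies on none of those diagonals;
    # note a bishop's own square is never safe by this definition.
    return sum(
        1
        for r in range(n)
        for c in range(n)
        if r + c not in sums and r - c not in diffs
    )
```

This runs in O(n² + b) time for b bishops, since each of the n² squares is checked against two hash sets in constant time.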

ARTICLE

Miguel Grinberg's article, "How LLMs Work: Explained Without Math," provides a clear, non-technical explanation of how large language models (LLMs) like GPT-2 and GPT-3 operate. The article demystifies these models by explaining that they function primarily by predicting the next word (or token) in a sequence based on the input text.

๐ฉโ๐ปCODE SHOT

What do you think the following code does?

```python
from re import sub

def ????????(line: str) -> str:
    s = sub(r'(.)\1', r'', line)
    if s == line:
        return line
    return ??????????(s)
```