Exploring Twitter API 2.0 (Free Plan)


I’m familiar with the Twitter APIs, having built a sentiment-analysis-driven, sensitive suicide-reporting bot with the standard API (1.1) over the last few years. I sort of knew how the Twitter APIs work, or so I thought.

Sensitive Suicide Reporting Bot for Twitter

2023: Twitter API 2.0

Fast forward to April 2023. Twitter Dev announced that it would take down its Standard API 1.1 on 29th April 2023. The paid Twitter API plans are either exorbitant ($42,000/month for the enterprise plan) or of little use to small tool makers ($100/month for the basic plan with 3,000 post calls/month).

Twitter still offers a free plan for its API version 2.0, but it mentions “write-only” access with a cap of 1,500 calls/month. This was confusing, because write access rarely comes without read access – read access is typically the inexpensive part and gets included in most free plans.

Since I couldn’t get any explanation of read access for Twitter with the free plan, I decided to try the free plan myself to figure it out. As of today (20th April 2023), I still can’t find any publicly available document or blog (including Twitter’s own community) that explains which APIs and features are included in the Twitter API 2.0 free plan. So here I go.

Twitter API 2.0 (Free Plan)

I wanted to build a simple CLI tool with the Twitter APIs to track a few growth metrics for its user. I had planned a “use your own API key” approach to distribute this standalone tool. For my initial PoC, I built a simple tool with Twitter API 1.1 that created an Excel report for a single use case. It worked great with my keys/tokens from API 1.1, and I got some good feedback on the report my tool’s MVP generated.

My next step was to get this tool working with API 2.0 on the free plan (since I wanted people to use their own keys/tokens from their free plans). To my surprise (euphemism for horror), I found that the free plan allows only 3 API endpoints – yes, only three. Moreover, one of them only gets your ID, name & username from Twitter – how useful, no?

Here are those three API endpoints from the free plan for Twitter API 2.0 –

  1. Create a tweet: POST /2/tweets
  2. Delete a tweet: DELETE /2/tweets/:id
  3. Lookup yourself (me API): GET /2/users/me

Interestingly, the third free-plan API doesn’t allow you to query any other user – you can only query yourself (the me API). I couldn’t get details of any other user in my (limited) experiments with it. 🤷🏽‍♂️
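For concreteness, here is a minimal sketch of what these three endpoints look like as HTTP requests, built with Java’s standard `java.net.http` client from Kotlin. The bearer token and the tweet ID `12345` are placeholders, and real write calls need a proper OAuth user-context token – this only shows the request shapes, nothing is sent.

```kotlin
import java.net.URI
import java.net.http.HttpRequest

// Placeholder token: real calls need a valid OAuth user-context token.
const val AUTH = "Bearer YOUR_TOKEN_HERE"

// 1. Create a tweet: POST /2/tweets
val createTweet: HttpRequest = HttpRequest.newBuilder(URI.create("https://api.twitter.com/2/tweets"))
    .header("Authorization", AUTH)
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString("""{"text": "Hello from the free plan"}"""))
    .build()

// 2. Delete a tweet: DELETE /2/tweets/:id (12345 is a placeholder ID)
val deleteTweet: HttpRequest = HttpRequest.newBuilder(URI.create("https://api.twitter.com/2/tweets/12345"))
    .header("Authorization", AUTH)
    .DELETE()
    .build()

// 3. Lookup yourself: GET /2/users/me
val lookupMe: HttpRequest = HttpRequest.newBuilder(URI.create("https://api.twitter.com/2/users/me"))
    .header("Authorization", AUTH)
    .GET()
    .build()
```

That’s the entire writable and readable surface of the free plan.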

To understand this limitation better, see the screenshot from my Twitter developer account below. This screen sums up everything you can currently do with Twitter API 2.0 on the free plan.

Twitter API 2.0 – Free Plan access


Considering the APIs available in the free plan, it is quite evident that this plan is of little use to the developers and tinkerers who have built useful tools and bots for Twitter over the years with the free APIs. I don’t know if anyone could do something useful with this free plan for Twitter API 2.0 (login-with-Twitter is the only useful use case I see here).

I can completely understand the frustration of indie hackers and Twitter tool makers with the new API pricing, and why they are choosing to shut down their tools and bots these days. 😦

You can compare the Twitter API 2.0 pricing plans for the features, APIs, and access levels they offer, and decide for yourself if any of them suits you.

Lastly, a small dose of laughter (and truth) from Twitter itself –


OpenCV on Android : Part 2

Detecting filled-in bubbles with computer vision

The code

This blog post delves into the actual image processing and computer vision parts of the Kotlin code. If you want to read about setting up OpenCV or understand the rationale behind some of the decisions in this code, please read – OpenCV on Android : Part 1.

Image Processing:

The basic image processing is pretty standard: reading the image in grayscale, Canny edge detection, and finding contours –

 val resultMat = Mat()
 val grayImg = Imgcodecs.imread(imgFilePath, Imgcodecs.IMREAD_GRAYSCALE)  //load in grayscale
 val edgesImg = Mat()  //for edge detection
 Imgproc.Canny(grayImg, edgesImg, lowThreshold, highThreshold)  //edges
 // thresholds and other params .....
 val contours: MutableList<MatOfPoint> = ArrayList()
 Imgproc.findContours(edgesImg, contours, Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)  //find contours

BTW, it is absolutely critical that you release every Mat used in your code as soon as possible; otherwise your app will run low on resources pretty quickly. That’s one big gotcha in the Android implementation of OpenCV.
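To make this release discipline harder to forget, a small `use`-style helper can guarantee the release. The `Releasable` interface below is a stand-in for illustration – in the real app you would write the extension directly on `org.opencv.core.Mat`, whose `release()` is the actual OpenCV call.

```kotlin
// Stand-in for org.opencv.core.Mat, which exposes release().
interface Releasable { fun release() }

// Mirrors Kotlin's `use` for Closeable: the resource is released even if the block throws.
inline fun <T : Releasable, R> T.useMat(block: (T) -> R): R {
    try {
        return block(this)
    } finally {
        release()  //always freed, even on exceptions
    }
}
```

With such a helper, temporary Mats used for edge detection or masking can’t accidentally outlive their scope.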

Computer Vision

The computer vision part of this app captures the paper MCQ form (bubble-sheet) with the main phone camera, processes it with OpenCV to detect all the bubbles on the MCQ sheet, detects the filled-in bubbles, and maps them to the relevant numbers or options to populate the UI form. Essentially, this Android app converts the paper bubble-sheet into a UI form for further processing.

Here are the relevant code snippets –

Detecting the bubble-sheet

//sort contours by their size to find the biggest one, that's the bubble-sheet
val sortedContours = contours.sortedWith(compareBy { Imgproc.contourArea(it) }).asReversed()
val approx = MatOfPoint2f()

for (i in sortedContours.indices) {
    val c = sortedContours[i]
    val mop2f = MatOfPoint2f()
    c.convertTo(mop2f, CvType.CV_32F)
    val peri = Imgproc.arcLength(mop2f, true)
    Imgproc.approxPolyDP(mop2f, approx, 0.02 * peri, true)
    mop2f.release()  //free memory

    if (4 == approx.toList().size) { //rectangle found
        Log.d(TAG, "Found largest contour/answersheet: ${approx}")
        break  //largest rectangle found
    }
}

//extract & transform the largest rectangle and crop the answer-sheet properly
val answerSheetCropped = transformAnswerSheet(approx, grayImg.clone())
approx.release()  //free memory

The transformAnswerSheet() function fixes the orientation of the detected rectangle, making it upright without requiring the user to capture it perfectly.
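The body of transformAnswerSheet() isn’t shown here, but a typical first step in such an orientation fix is ordering the quadrilateral’s four corners consistently before warping. A plain-Kotlin sketch of that ordering (points as (x, y) pairs; in the real code they would come from `approx`):

```kotlin
// Order 4 corners as top-left, top-right, bottom-right, bottom-left.
// Heuristic: top-left has the smallest x + y, bottom-right the largest;
// top-right has the smallest y - x, bottom-left the largest.
fun orderCorners(pts: List<Pair<Double, Double>>): List<Pair<Double, Double>> {
    require(pts.size == 4) { "expected exactly 4 corners" }
    val topLeft = pts.minByOrNull { it.first + it.second }!!
    val bottomRight = pts.maxByOrNull { it.first + it.second }!!
    val topRight = pts.minByOrNull { it.second - it.first }!!
    val bottomLeft = pts.maxByOrNull { it.second - it.first }!!
    return listOf(topLeft, topRight, bottomRight, bottomLeft)
}
```

With the corners in a known order, a perspective warp can map them onto an upright rectangle.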

The image below shows how the bubble-sheet is detected –

Detecting the bubble-sheet with Computer Vision

Detecting all the bubbles

The bubble detection has a few important points –

  • It is better to detect the boundingRect() of the bubble/circle. Detecting circles may not always succeed, due to errors in photocopying the bubble sheet and, more importantly, because the user may not fill the bubble strictly within the circle. All these cases work better when the bounding rectangle is detected with an approximate aspect ratio (between 0.9 and 1.1).
  • It is important to set the values of bubbleDiameter and bubbleFillThreshold proportionally to the device resolution and image size. This was done by examining these values across various device configurations.
bubbleDiameter = 2 * getEstimatedBubbleRadius(answerSheetCropped.width())
bubbleFillThreshold = getBubbleFillThreshold(bubbleDiameter / 2)
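The bodies of these two helpers aren’t shown; here is one illustrative way they might scale with the cropped sheet’s width. The constants 1/40 and 0.6 are made-up placeholders – the real values were tuned by examining various device configurations, as noted above.

```kotlin
import kotlin.math.PI

// Illustrative only: radius grows linearly with the cropped sheet's width.
fun getEstimatedBubbleRadius(sheetWidthPx: Int): Int =
    (sheetWidthPx / 40).coerceAtLeast(1)

// Illustrative only: fill threshold kept below the full bubble area, πr².
fun getBubbleFillThreshold(radiusPx: Int): Int {
    val area = PI * radiusPx * radiusPx
    return (0.6 * area).toInt()
}
```

Keeping the threshold a fraction of πr² gives some tolerance for sloppy or partial fills.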

//proceed with a binary image now
val binaryImage = Mat()
Imgproc.threshold(answerSheetCropped, binaryImage, 0.0, 255.0, Imgproc.THRESH_BINARY_INV or Imgproc.THRESH_OTSU)

val sheetContours: MutableList<MatOfPoint> = ArrayList()
Imgproc.findContours(binaryImage, sheetContours, Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)

//find MCQ bubbles in the sheet
val bubbleContours: MutableList<MatOfPoint> = ArrayList()
for (sc in sheetContours) {
    val rect = Imgproc.boundingRect(sc)
    val aspectRatio = rect.width / rect.height.toDouble()

    if (rect.y > bubbleStartY && rect.width in bubbleDiameter..(2 * bubbleDiameter)
        && rect.height in bubbleDiameter..(2 * bubbleDiameter) && aspectRatio in 0.9..1.1) { //CIRCLE
        bubbleContours.add(sc)  //keep this bubble contour
    }
}

This small piece of code actually draws the circles over all the bubbles on the original bubble-sheet image.

//draw ALL bubbles on the answer-sheet
Imgproc.cvtColor(answerSheetCropped, resultMat, Imgproc.COLOR_GRAY2RGB)
Imgproc.drawContours(resultMat, bubbleContours, -1, Scalar(0.0, 255.0, 0.0), 5)

The image shows detection of all the bubbles (also shown in the app)  –

Detecting all the bubbles with computer vision

Detecting filled-in bubbles

To detect a filled-in bubble, we check each bubble on the binary image (while masking the other bubbles) and see if its pixel density is higher than the threshold (bubbleFillThreshold), which itself is less than the bubble area πr², where ‘r’ is the radius of the bubble as captured at the device resolution. This operation is resource-intensive with OpenCV and takes some time – it would be much more resource-efficient and faster with vectorized Python code, but I had to work within the limitations of the Android platform. Anyway.

Here is the code snippet that gets the filled-in bubble’s index in a row –

private fun getFilledBubbleIndex(binarySheetImg: Mat, bubbleRow: MutableList<MatOfPoint>, bubbleFillThreshold: Int): Int {
    var filledBubble: Pair<Int, Int>? = null  //(index, pixelDensity)
    for (j in bubbleRow.indices) {
        val mask = Mat.zeros(binarySheetImg.size(), CvType.CV_8U)  //mask to see filled-in bubbles
        val bubble = bubbleRow[j]
        Imgproc.drawContours(mask, listOf(bubble), -1, Scalar(255.0, 255.0, 255.0), -1)

        val outMask = Mat()
        Core.bitwise_and(binarySheetImg, binarySheetImg, outMask, mask)
        val nzPixels = Core.countNonZero(outMask)
        mask.release()  //free memory
        outMask.release()

        if (nzPixels > bubbleFillThreshold) { //detected a filled bubble
            if (null == filledBubble || nzPixels > filledBubble.second) { //darkest bubble of the row
                filledBubble = Pair(j, nzPixels)
            }
        }
    }
    return filledBubble?.first ?: -1 //filled bubble not found?
}

Once the index of a filled-in bubble is detected by computer vision, it can easily be mapped to the relevant number (0 to 9) or option (a, b, c, d, etc.) for further processing. Once the relevant data is extracted from the bubble sheet, it is a straightforward Android app that can handle this data as desired.
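That mapping step can be as simple as the sketch below; the 0..3 range and the letter labels are illustrative, since the real form layout isn’t shown here.

```kotlin
// Map a detected bubble index to an option label; -1 (no filled bubble) maps to null.
fun indexToOption(index: Int): Char? =
    if (index in 0..3) 'a' + index else null
```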

The binary image below shows how the filled-in bubble detection works internally with computer vision. This image is never shown in the actual app; I am adding it here for better conceptual clarity –

Detecting filled-in bubbles with computer vision

I hope this explains and confirms that it is possible to build an OMR computer vision app using OpenCV on Android. That said, I’d once again reiterate what I said at the beginning – if possible, build OpenCV solutions with Python; it will save you a lot of headache. 🙂


OpenCV on Android : Part 1

Detecting all the bubbles with computer vision

Starting with why?

We had a rather peculiar problem – we wanted to empower ground-level volunteers to scan MCQ sheets (a simple MCQ form as a bubble-sheet – for surveys or exams) without any expensive OMR machine, and we wanted to explore whether their existing smartphones could be feasible enablers. Moreover, given their remote rural or semi-urban locations, we didn’t want to assume uninterrupted Internet/data connectivity either, so using online APIs wasn’t feasible.

Simply put, we wanted to use computer vision offline on an average Android phone with its camera. I used Kotlin for programming instead of Java because I wanted to learn Kotlin. Here is an article in two parts explaining my learning – technical as well as non-technical.

  1. First things first: If you can, use Python for any computer vision project. It is undoubtedly a much better choice with the OpenCV library, and the availability of packages such as NumPy and pandas makes it the preferred technology stack for most AI/ML projects.
  2. Android will present you with tons of client-side compatibility issues – different devices will throw different problems, and you can never test on all physical devices. It remains a formidable challenge, and at times you have no option but to give up supporting a few devices.

With both these disclaimers/caveats out of the way, let me get to the good news. YES, computer vision worked successfully on Android devices. We built a simple, ‘Don’t make me think’ app to scan the MCQ forms using mid-range Android smartphones.

I should also add that this is NOT a generic OMR app to read any bubble-sheet, I have built this app for a specific problem that I have discussed above.

Here is a short demo video of computer vision in action from this app –


I am just listing the major steps here. I cannot share the complete code for this app, but here are a few Kotlin code snippets with my comments that should offer enough insight to use the OpenCV library on Android devices.


Most modern phones have an array of different cameras and tons of features/filters to manipulate photos. I didn’t want end users to struggle with exposure, focus, and so on to get the right image for processing, nor did I want the user’s default camera settings or filters to affect the image. So I decided to have the app itself handle the device camera, with auto-focus and the required settings, without asking the end user to do anything for a clear image (the ‘Don’t make me think’ approach again). Within these constraints, I went ahead with Android’s Camera2 API, as it offers quite granular control over the camera hardware. You can get sample code from Android’s official GitHub repo to understand how it is implemented. Most other articles I found online discuss the Camera2 API with Java code, which I didn’t use – Kotlin offers a much cleaner and more efficient way of building Android apps, including the useful Kotlin coroutines.

For the rest of this article, I assume that you’re well-versed in Android lifecycle concepts, their challenges, and the importance of keeping the main UI thread responsive at all times. Keep the user informed with small messages about the process, so they are not lost when a time-consuming operation is inevitable.

Setting up OpenCV:

This is crucial. Most Android devices that ran into hardware or driver issues couldn’t initialize OpenCV properly. This needs to be done in the Android Application class itself, so that OpenCV works fine in all the activities/fragments that need to use the OpenCV APIs. Here is a code snippet for initializing OpenCV in the Android Application –

override fun onCreate() {
    super.onCreate()
    //other initialization...
    initOpenCV()
}

private fun initOpenCV() {
    val engineInitialized = OpenCVLoader.initDebug()
    if (engineInitialized) {
        Log.i(TAG, "The OpenCV was successfully initialized in debug mode using .so libs.")
    } else {
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_4_0, this, object : LoaderCallbackInterface {
            override fun onManagerConnected(status: Int) {
                when (status) {
                    LoaderCallbackInterface.SUCCESS -> Log.d(TAG, "OpenCV successfully started.")
                    LoaderCallbackInterface.INIT_FAILED -> Log.d(TAG, "Failed to start OpenCV.")
                    LoaderCallbackInterface.MARKET_ERROR -> Log.d(TAG, "Google Play Store could not be invoked. Please check if you have the Google Play Store app installed and try again.")
                    LoaderCallbackInterface.INSTALL_CANCELED -> Log.d(TAG, "OpenCV installation has been cancelled by the user.")
                    LoaderCallbackInterface.INCOMPATIBLE_MANAGER_VERSION -> Log.d(TAG, "This version of OpenCV Manager is incompatible. Possibly, a service update is required.")
                }
            }

            override fun onPackageInstall(operation: Int, callback: InstallCallbackInterface?) {
                Log.d(TAG, "OpenCV Manager successfully installed from Google Play.")
            }
        })
    }
}
A small note – you might run into this error in your logs despite OpenCV getting initialized properly.

E/OpenCV/StaticHelper: OpenCV error: Cannot load info library for OpenCV

Please IGNORE this error message – the ‘info library’ is used for special Android configurations, like builds with CUDA support, as explained by an OpenCV contributor himself.

I’ll explain the image processing and computer vision part of the Kotlin code in the next blog post.